Dec 11 08:22:54 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 08:22:54 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 08:22:54 crc restorecon[4689]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 08:22:54 crc restorecon[4689]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc 
restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:54 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 
08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 
08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 08:22:55 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 
crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc 
restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 08:22:55 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 11 08:22:55 crc kubenswrapper[4992]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 08:22:55 crc kubenswrapper[4992]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 11 08:22:55 crc kubenswrapper[4992]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 08:22:55 crc kubenswrapper[4992]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 11 08:22:55 crc kubenswrapper[4992]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 11 08:22:55 crc kubenswrapper[4992]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.902408 4992 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909274 4992 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909295 4992 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909301 4992 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909306 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909312 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909316 4992 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909321 4992 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909325 4992 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909332 4992 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909338 4992 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909344 4992 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909348 4992 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909353 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909358 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909363 4992 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909367 4992 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909372 4992 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909376 4992 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909380 4992 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909384 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909388 4992 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909392 4992 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909396 4992 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 
08:22:55.909400 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909404 4992 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909409 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909413 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909417 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909421 4992 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909425 4992 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909430 4992 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909434 4992 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909439 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909443 4992 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909448 4992 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909452 4992 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909456 4992 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909460 4992 feature_gate.go:330] unrecognized feature 
gate: MachineAPIMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909464 4992 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909469 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909473 4992 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909477 4992 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909484 4992 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909491 4992 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909495 4992 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909500 4992 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909506 4992 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909511 4992 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909515 4992 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909519 4992 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909524 4992 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909528 4992 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909532 4992 feature_gate.go:330] unrecognized feature gate: Example Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909536 4992 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909540 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909544 4992 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909548 4992 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909552 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909558 4992 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909562 4992 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909566 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909571 4992 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909575 4992 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909579 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909593 4992 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909598 4992 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909602 4992 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909607 4992 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909612 4992 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909618 4992 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.909622 4992 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909724 4992 flags.go:64] FLAG: --address="0.0.0.0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909734 4992 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909743 4992 flags.go:64] FLAG: --anonymous-auth="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909756 4992 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909762 4992 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909769 4992 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909777 4992 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909783 4992 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909789 4992 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909795 4992 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909801 4992 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909807 4992 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909812 4992 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909817 4992 flags.go:64] FLAG: 
--cgroup-root="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909821 4992 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909826 4992 flags.go:64] FLAG: --client-ca-file="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909831 4992 flags.go:64] FLAG: --cloud-config="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909836 4992 flags.go:64] FLAG: --cloud-provider="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909841 4992 flags.go:64] FLAG: --cluster-dns="[]" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909846 4992 flags.go:64] FLAG: --cluster-domain="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909851 4992 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909856 4992 flags.go:64] FLAG: --config-dir="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909861 4992 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909866 4992 flags.go:64] FLAG: --container-log-max-files="5" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909872 4992 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909877 4992 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909883 4992 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909888 4992 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909894 4992 flags.go:64] FLAG: --contention-profiling="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909898 4992 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909904 4992 flags.go:64] FLAG: 
--cpu-cfs-quota-period="100ms" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909909 4992 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909915 4992 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909922 4992 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909927 4992 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909934 4992 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909939 4992 flags.go:64] FLAG: --enable-load-reader="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909944 4992 flags.go:64] FLAG: --enable-server="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909949 4992 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909956 4992 flags.go:64] FLAG: --event-burst="100" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909960 4992 flags.go:64] FLAG: --event-qps="50" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909965 4992 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909970 4992 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909975 4992 flags.go:64] FLAG: --eviction-hard="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909981 4992 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909986 4992 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909991 4992 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.909997 4992 
flags.go:64] FLAG: --eviction-soft="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910003 4992 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910008 4992 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910012 4992 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910018 4992 flags.go:64] FLAG: --experimental-mounter-path="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910022 4992 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910028 4992 flags.go:64] FLAG: --fail-swap-on="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910033 4992 flags.go:64] FLAG: --feature-gates="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910039 4992 flags.go:64] FLAG: --file-check-frequency="20s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910044 4992 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910049 4992 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910054 4992 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910059 4992 flags.go:64] FLAG: --healthz-port="10248" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910064 4992 flags.go:64] FLAG: --help="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910069 4992 flags.go:64] FLAG: --hostname-override="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910074 4992 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910079 4992 flags.go:64] FLAG: --http-check-frequency="20s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910086 4992 flags.go:64] FLAG: 
--image-credential-provider-bin-dir="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910091 4992 flags.go:64] FLAG: --image-credential-provider-config="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910097 4992 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910105 4992 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910110 4992 flags.go:64] FLAG: --image-service-endpoint="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910115 4992 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910121 4992 flags.go:64] FLAG: --kube-api-burst="100" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910126 4992 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910132 4992 flags.go:64] FLAG: --kube-api-qps="50" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910137 4992 flags.go:64] FLAG: --kube-reserved="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910143 4992 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910148 4992 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910153 4992 flags.go:64] FLAG: --kubelet-cgroups="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910158 4992 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910164 4992 flags.go:64] FLAG: --lock-file="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910169 4992 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910174 4992 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910180 4992 
flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910193 4992 flags.go:64] FLAG: --log-json-split-stream="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910199 4992 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910204 4992 flags.go:64] FLAG: --log-text-split-stream="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910209 4992 flags.go:64] FLAG: --logging-format="text" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910215 4992 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910221 4992 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910226 4992 flags.go:64] FLAG: --manifest-url="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910231 4992 flags.go:64] FLAG: --manifest-url-header="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910238 4992 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910244 4992 flags.go:64] FLAG: --max-open-files="1000000" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910252 4992 flags.go:64] FLAG: --max-pods="110" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910258 4992 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910263 4992 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910269 4992 flags.go:64] FLAG: --memory-manager-policy="None" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910275 4992 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910280 4992 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 11 08:22:55 crc 
kubenswrapper[4992]: I1211 08:22:55.910286 4992 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910293 4992 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910305 4992 flags.go:64] FLAG: --node-status-max-images="50" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910310 4992 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910315 4992 flags.go:64] FLAG: --oom-score-adj="-999" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910320 4992 flags.go:64] FLAG: --pod-cidr="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910325 4992 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910334 4992 flags.go:64] FLAG: --pod-manifest-path="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910339 4992 flags.go:64] FLAG: --pod-max-pids="-1" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910344 4992 flags.go:64] FLAG: --pods-per-core="0" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910349 4992 flags.go:64] FLAG: --port="10250" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910355 4992 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910360 4992 flags.go:64] FLAG: --provider-id="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910365 4992 flags.go:64] FLAG: --qos-reserved="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910371 4992 flags.go:64] FLAG: --read-only-port="10255" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910376 4992 flags.go:64] FLAG: --register-node="true" Dec 11 08:22:55 crc 
kubenswrapper[4992]: I1211 08:22:55.910382 4992 flags.go:64] FLAG: --register-schedulable="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910387 4992 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910395 4992 flags.go:64] FLAG: --registry-burst="10" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910401 4992 flags.go:64] FLAG: --registry-qps="5" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910406 4992 flags.go:64] FLAG: --reserved-cpus="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910411 4992 flags.go:64] FLAG: --reserved-memory="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910418 4992 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910423 4992 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910428 4992 flags.go:64] FLAG: --rotate-certificates="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910433 4992 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910438 4992 flags.go:64] FLAG: --runonce="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910443 4992 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910449 4992 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910454 4992 flags.go:64] FLAG: --seccomp-default="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910459 4992 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910465 4992 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910470 4992 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 
11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910479 4992 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910484 4992 flags.go:64] FLAG: --storage-driver-password="root" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910489 4992 flags.go:64] FLAG: --storage-driver-secure="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910494 4992 flags.go:64] FLAG: --storage-driver-table="stats" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910499 4992 flags.go:64] FLAG: --storage-driver-user="root" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910504 4992 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910509 4992 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910514 4992 flags.go:64] FLAG: --system-cgroups="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910519 4992 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910527 4992 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910532 4992 flags.go:64] FLAG: --tls-cert-file="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910536 4992 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910542 4992 flags.go:64] FLAG: --tls-min-version="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910547 4992 flags.go:64] FLAG: --tls-private-key-file="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910551 4992 flags.go:64] FLAG: --topology-manager-policy="none" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910556 4992 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910561 4992 flags.go:64] FLAG: 
--topology-manager-scope="container" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910566 4992 flags.go:64] FLAG: --v="2" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910572 4992 flags.go:64] FLAG: --version="false" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910578 4992 flags.go:64] FLAG: --vmodule="" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910584 4992 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.910590 4992 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910728 4992 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910734 4992 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910739 4992 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910744 4992 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910750 4992 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910754 4992 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910759 4992 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910763 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910769 4992 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910774 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 08:22:55 crc 
kubenswrapper[4992]: W1211 08:22:55.910782 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910788 4992 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910794 4992 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910799 4992 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910804 4992 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910808 4992 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910812 4992 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910816 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910821 4992 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910825 4992 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910829 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910833 4992 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910837 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910843 4992 feature_gate.go:353] Setting GA feature 
gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910849 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910854 4992 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910858 4992 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910862 4992 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910867 4992 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910871 4992 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910875 4992 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910880 4992 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910884 4992 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910888 4992 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910892 4992 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910897 4992 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910901 4992 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910905 4992 feature_gate.go:330] unrecognized feature gate: 
NetworkSegmentation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910914 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910918 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910924 4992 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910928 4992 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910935 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910940 4992 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910944 4992 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910948 4992 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910952 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910956 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910961 4992 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910965 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910969 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910973 4992 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910977 4992 
feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910982 4992 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910986 4992 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910991 4992 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.910997 4992 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911002 4992 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911008 4992 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911013 4992 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911018 4992 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911022 4992 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911027 4992 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911032 4992 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911037 4992 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911041 4992 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911045 4992 feature_gate.go:330] 
unrecognized feature gate: Example Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911050 4992 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911054 4992 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911058 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.911062 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.911077 4992 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.921100 4992 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.921160 4992 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921295 4992 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921308 4992 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921317 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921329 4992 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 08:22:55 crc 
kubenswrapper[4992]: W1211 08:22:55.921338 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921347 4992 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921357 4992 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921368 4992 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921379 4992 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921388 4992 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921399 4992 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921409 4992 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921419 4992 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921427 4992 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921436 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921445 4992 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921453 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921460 4992 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921469 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921476 4992 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921484 4992 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921491 4992 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921499 4992 feature_gate.go:330] unrecognized feature gate: Example Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921507 4992 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921515 4992 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921523 4992 
feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921530 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921538 4992 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921546 4992 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921555 4992 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921562 4992 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921571 4992 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921579 4992 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921587 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921597 4992 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921604 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921612 4992 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921620 4992 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921628 4992 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921663 4992 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921671 4992 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921679 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921686 4992 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921694 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921702 4992 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921710 4992 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921717 4992 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921725 4992 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921733 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921741 4992 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921749 4992 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921756 4992 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921764 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921772 4992 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 
08:22:55.921779 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921787 4992 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921797 4992 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921809 4992 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921818 4992 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921828 4992 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921839 4992 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921848 4992 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921856 4992 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921867 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921875 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921883 4992 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921891 4992 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921898 4992 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 08:22:55 crc 
kubenswrapper[4992]: W1211 08:22:55.921906 4992 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921914 4992 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.921924 4992 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.921938 4992 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922195 4992 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922225 4992 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922236 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922247 4992 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922257 4992 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922267 4992 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922278 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922288 4992 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922302 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922312 4992 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922322 4992 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922331 4992 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922339 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922348 4992 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922358 4992 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922369 4992 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922380 4992 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922390 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922405 4992 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922419 4992 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922430 4992 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922441 4992 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922451 4992 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922459 4992 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922468 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922476 4992 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922483 4992 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922491 4992 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922498 4992 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922509 4992 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922519 4992 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922528 4992 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922536 4992 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922545 4992 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922555 4992 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922564 4992 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922573 4992 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922581 4992 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922589 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922597 4992 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922605 4992 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922613 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922621 4992 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922630 4992 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922675 4992 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922685 4992 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922693 4992 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922703 4992 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922715 4992 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922724 4992 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922732 4992 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922741 4992 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922750 4992 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922758 4992 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922768 4992 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922778 4992 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922787 4992 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922795 4992 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922804 4992 feature_gate.go:330] unrecognized feature gate: Example Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922812 4992 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922819 4992 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922828 4992 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922835 4992 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922843 4992 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922851 4992 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922858 4992 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922866 4992 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922875 4992 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922882 4992 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922890 4992 
feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 08:22:55 crc kubenswrapper[4992]: W1211 08:22:55.922899 4992 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.922912 4992 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.923254 4992 server.go:940] "Client rotation is on, will bootstrap in background" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.928019 4992 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.928249 4992 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.929318 4992 server.go:997] "Starting client certificate rotation" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.929358 4992 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.929855 4992 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 21:57:08.598585795 +0000 UTC Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.929972 4992 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 61h34m12.668618629s for next certificate rotation Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.941208 4992 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.945501 4992 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.965289 4992 log.go:25] "Validated CRI v1 runtime API" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.985275 4992 log.go:25] "Validated CRI v1 image API" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.987432 4992 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.990714 4992 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-11-08-18-39-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 11 08:22:55 crc kubenswrapper[4992]: I1211 08:22:55.990743 4992 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.004481 4992 manager.go:217] Machine: {Timestamp:2025-12-11 08:22:56.003322904 +0000 UTC m=+0.262796850 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ccee3d03-425b-471b-a150-1d5509fbd062 BootID:ecae5cb9-c8d1-4572-b769-63cb0d588631 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5a:1a:00 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5a:1a:00 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:28:e2:bb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:11:c4:a7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2f:7f:13 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d9:de:5b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:43:5f:3d:50:19 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:23:aa:81:8a:57 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.004712 4992 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.004875 4992 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.005395 4992 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.005566 4992 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.005654 4992 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.006213 4992 topology_manager.go:138] "Creating topology manager with none policy"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.006230 4992 container_manager_linux.go:303] "Creating device plugin manager"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.006350 4992 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.006378 4992 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.006530 4992 state_mem.go:36] "Initialized new in-memory state store"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.006612 4992 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.007352 4992 kubelet.go:418] "Attempting to sync node with API server"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.007373 4992 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.007387 4992 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.007401 4992 kubelet.go:324] "Adding apiserver pod source"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.007412 4992 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.009222 4992 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.009656 4992 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.010937 4992 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011405 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011429 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011437 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011444 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011456 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011464 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011472 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011483 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011492 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011500 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011510 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011518 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.011920 4992 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.012287 4992 server.go:1280] "Started kubelet"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.013937 4992 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.016016 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused
Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.016119 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" logger="UnhandledError"
Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.014180 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused
Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.016316 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" logger="UnhandledError"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.013955 4992 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.016776 4992 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.016974 4992 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused
Dec 11 08:22:56 crc systemd[1]: Started Kubernetes Kubelet.
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.021563 4992 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.023053 4992 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.023541 4992 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:54:24.457259576 +0000 UTC
Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.023011 4992 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.105:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18801b93e8fcd7e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 08:22:56.012261349 +0000 UTC m=+0.271735275,LastTimestamp:2025-12-11 08:22:56.012261349 +0000 UTC m=+0.271735275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.023903 4992 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.023985 4992 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.023909 4992 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.024193 4992 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.024511 4992 server.go:460] "Adding debug handlers to kubelet server"
Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.024513 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="200ms"
Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.024758 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused
Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.025040 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" logger="UnhandledError"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026013 4992 factory.go:55] Registering systemd factory
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026067 4992 factory.go:221] Registration of the systemd container factory successfully
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026410 4992 factory.go:153] Registering CRI-O factory
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026448 4992 factory.go:221] Registration of the crio container factory successfully
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026532 4992 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026577 4992 factory.go:103] Registering Raw factory
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.026598 4992 manager.go:1196] Started watching for new ooms in manager
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.027410 4992 manager.go:319] Starting recovery of all containers
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040520 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040618 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 11
08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040700 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040739 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040763 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040788 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040817 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040841 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040875 
4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040900 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040924 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040955 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.040977 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041010 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041030 4992 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041058 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041079 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041105 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041131 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041174 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041223 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041253 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041288 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041314 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041334 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041363 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041392 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041424 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041453 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041474 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041493 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041521 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041541 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" 
seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041567 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041589 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041613 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041698 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041768 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041798 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041828 
4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041850 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041880 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041911 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041933 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041960 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.041984 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042013 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042034 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042059 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042086 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042108 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042133 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042164 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042201 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042231 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042263 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042295 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042318 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" 
seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042346 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042367 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042396 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042463 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042485 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042516 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042537 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042557 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042585 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042606 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042658 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042708 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042731 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042761 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042781 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042809 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042829 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042848 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042876 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042898 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042925 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042946 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042969 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.042998 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043020 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043049 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043071 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043092 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043121 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043143 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043171 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043191 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043213 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043242 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.043263 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045203 4992 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045273 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045294 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045313 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045330 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045353 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045368 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045384 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045402 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045416 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045430 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045449 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045481 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045500 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045534 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045556 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045574 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045594 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045619 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045655 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045760 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045786 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045808 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045824 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045836 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045855 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045873 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045891 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045905 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045921 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045938 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045954 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045968 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.045990 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046008 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046027 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046042 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046060 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046078 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046092 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046118 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046132 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046145 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046165 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046180 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046202 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046217 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046233 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046253 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046268 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046286 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046300 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046315 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046334 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046349 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046362 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046381 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046396 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046445 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046460 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046473 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046492 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046515 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046532 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046545 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046562 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046579 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046593 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046609 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046623 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046697 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046715 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046733 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046749 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046764 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046776 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046795 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046810 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046825 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046840 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046855 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046869 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046882 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046895 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046911 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046925 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046940 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046952 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.046964 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047031 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047049 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047066 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047079 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047092 4992 reconstruct.go:130]
"Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047110 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047123 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047140 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047153 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047168 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047184 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047196 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047214 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047227 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047240 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047261 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047274 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047295 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047308 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047322 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047338 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047351 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047364 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047381 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047393 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047409 4992 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047421 4992 reconstruct.go:97] "Volume reconstruction finished" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.047430 4992 reconciler.go:26] "Reconciler: start to sync state" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.053068 4992 manager.go:324] Recovery completed Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.067031 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.069032 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.069087 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.069108 4992 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.069920 4992 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.069938 4992 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.069958 4992 state_mem.go:36] "Initialized new in-memory state store" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.079106 4992 policy_none.go:49] "None policy: Start" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.079964 4992 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.079991 4992 state_mem.go:35] "Initializing new in-memory state store" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.090971 4992 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.093620 4992 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.093692 4992 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.093729 4992 kubelet.go:2335] "Starting kubelet main sync loop" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.093790 4992 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.095767 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.095837 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" logger="UnhandledError" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.124729 4992 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.133755 4992 manager.go:334] "Starting Device Plugin manager" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.133928 4992 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.134132 4992 server.go:79] "Starting device plugin registration server" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.134689 4992 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 11 08:22:56 crc 
kubenswrapper[4992]: I1211 08:22:56.134714 4992 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.134884 4992 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.135048 4992 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.135055 4992 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.142082 4992 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.194327 4992 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.194448 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.196279 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.196332 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.196347 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.196562 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.196924 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.196970 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198164 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198206 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198223 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198260 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198307 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198341 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198506 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.198557 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.199456 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.199508 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.199532 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.199708 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.199791 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.199827 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.200386 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.200418 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.200434 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.200908 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.200940 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.200956 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201038 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201060 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201072 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201111 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc 
kubenswrapper[4992]: I1211 08:22:56.201346 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201416 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201902 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201936 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.201952 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.202165 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.202204 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.202471 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.202510 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.202523 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.203423 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.203484 4992 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.203503 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.225571 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="400ms" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.235143 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.236735 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.236797 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.236813 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.236850 4992 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.237506 4992 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.105:6443: connect: connection refused" node="crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.249915 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 
crc kubenswrapper[4992]: I1211 08:22:56.249962 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.249998 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250019 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250041 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250060 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250083 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250106 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250130 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250152 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250173 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250194 
4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250214 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250235 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.250255 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351449 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351547 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351593 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351625 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351686 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351717 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351747 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351740 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351854 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351777 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351931 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351934 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351999 
4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351969 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.351999 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352020 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352042 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352078 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352195 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352300 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352347 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352368 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352460 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352527 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352585 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352672 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352532 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352567 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352768 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.352467 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.437982 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.439856 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.439909 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.439924 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.439957 4992 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.440519 4992 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.105:6443: connect: connection refused" node="crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.525693 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.532346 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.550995 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.553518 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-86cc73292f205eaa41b2cf00be2195dd1bcaf0a1f68d6ea08d5017bfca4a3737 WatchSource:0}: Error finding container 86cc73292f205eaa41b2cf00be2195dd1bcaf0a1f68d6ea08d5017bfca4a3737: Status 404 returned error can't find the container with id 86cc73292f205eaa41b2cf00be2195dd1bcaf0a1f68d6ea08d5017bfca4a3737 Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.555373 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c81b62705bc4b75ac70264e7a54cc53ab6378ac4795dd4920623c0005b8ace55 WatchSource:0}: Error finding container c81b62705bc4b75ac70264e7a54cc53ab6378ac4795dd4920623c0005b8ace55: Status 404 returned error can't find the container with id c81b62705bc4b75ac70264e7a54cc53ab6378ac4795dd4920623c0005b8ace55 Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.566298 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.566854 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7bc100493acd3caba2a619540e3e3174e4885d2f2b3158948fd8f39e642cd193 WatchSource:0}: Error finding container 7bc100493acd3caba2a619540e3e3174e4885d2f2b3158948fd8f39e642cd193: Status 404 returned error can't find the container with id 7bc100493acd3caba2a619540e3e3174e4885d2f2b3158948fd8f39e642cd193 Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.576524 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.592202 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-9c9d752b6542b12dea74a1f2315ae45d04e7426406aefc7c2730077b94ae1423 WatchSource:0}: Error finding container 9c9d752b6542b12dea74a1f2315ae45d04e7426406aefc7c2730077b94ae1423: Status 404 returned error can't find the container with id 9c9d752b6542b12dea74a1f2315ae45d04e7426406aefc7c2730077b94ae1423 Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.593161 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-17357a9ba16d5d41775284210341e1120853026441457b75c03104419b611c79 WatchSource:0}: Error finding container 17357a9ba16d5d41775284210341e1120853026441457b75c03104419b611c79: Status 404 returned error can't find the container with id 17357a9ba16d5d41775284210341e1120853026441457b75c03104419b611c79 Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.626583 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="800ms" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.841012 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.842707 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.842748 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.842764 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:56 crc kubenswrapper[4992]: I1211 08:22:56.842790 4992 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.843258 4992 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.105:6443: connect: connection refused" node="crc" Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.893565 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.893739 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection 
refused" logger="UnhandledError" Dec 11 08:22:56 crc kubenswrapper[4992]: W1211 08:22:56.977336 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused Dec 11 08:22:56 crc kubenswrapper[4992]: E1211 08:22:56.977475 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" logger="UnhandledError" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.018769 4992 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.023687 4992 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:40:12.907652273 +0000 UTC Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.023754 4992 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 390h17m15.883900077s for next certificate rotation Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.100526 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.100830 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"86cc73292f205eaa41b2cf00be2195dd1bcaf0a1f68d6ea08d5017bfca4a3737"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.101998 4992 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230" exitCode=0 Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.102088 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.102138 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"17357a9ba16d5d41775284210341e1120853026441457b75c03104419b611c79"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.102236 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.103577 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.103620 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.103667 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.104251 4992 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8" exitCode=0 Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.104317 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.104406 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c9d752b6542b12dea74a1f2315ae45d04e7426406aefc7c2730077b94ae1423"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.104707 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.107294 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.107330 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.107382 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.107396 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.108146 4992 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="eec23b5f0e06e0e486f0e7d1516aa3639937e5dac47b1e259a2443d55553a07e" exitCode=0 Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.108177 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"eec23b5f0e06e0e486f0e7d1516aa3639937e5dac47b1e259a2443d55553a07e"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.108216 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7bc100493acd3caba2a619540e3e3174e4885d2f2b3158948fd8f39e642cd193"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.108157 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.108250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.108268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.110217 4992 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de" exitCode=0 Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.110241 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.110282 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c81b62705bc4b75ac70264e7a54cc53ab6378ac4795dd4920623c0005b8ace55"} Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.110358 4992 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.111463 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.111487 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.111496 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:57 crc kubenswrapper[4992]: W1211 08:22:57.178725 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused Dec 11 08:22:57 crc kubenswrapper[4992]: E1211 08:22:57.179248 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" logger="UnhandledError" Dec 11 08:22:57 crc kubenswrapper[4992]: W1211 08:22:57.180375 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.105:6443: connect: connection refused Dec 11 08:22:57 crc kubenswrapper[4992]: E1211 08:22:57.180418 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.105:6443: connect: connection refused" 
logger="UnhandledError" Dec 11 08:22:57 crc kubenswrapper[4992]: E1211 08:22:57.427585 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="1.6s" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.644312 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.646182 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.646220 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.646234 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:57 crc kubenswrapper[4992]: I1211 08:22:57.646263 4992 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 08:22:57 crc kubenswrapper[4992]: E1211 08:22:57.646748 4992 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.105:6443: connect: connection refused" node="crc" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.116924 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.116972 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.116984 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.117059 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.119592 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.119618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.119666 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.122440 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.122495 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.122518 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.122625 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.124195 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.124234 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.124250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.135244 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.135330 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.135353 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.135378 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.138747 4992 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a" exitCode=0 Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.138852 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a"} Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.138899 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.139087 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.139959 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.139991 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.140000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.140529 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.140592 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:58 crc kubenswrapper[4992]: I1211 08:22:58.140618 
4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.146780 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2"} Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.146932 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.148447 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.148520 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.148540 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.150036 4992 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66" exitCode=0 Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.150105 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66"} Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.150322 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.151934 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 
08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.152032 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.152061 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.153433 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e830ccff3b0b171aacba3a03093a0b2e514cb1513e4485b3001620cdb52de376"} Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.153510 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.153567 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.153582 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.153613 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.154813 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.154850 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.154858 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.155153 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 
08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.155221 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.155243 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.155268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.155315 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.155330 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.247342 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.249120 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.249198 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.249228 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.249278 4992 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 08:22:59 crc kubenswrapper[4992]: I1211 08:22:59.985847 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.029805 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.161521 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d"} Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.161609 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc"} Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.161628 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c"} Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.161663 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.161720 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.162754 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.162791 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.162811 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.163593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:00 crc 
kubenswrapper[4992]: I1211 08:23:00.163622 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:00 crc kubenswrapper[4992]: I1211 08:23:00.163661 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.172534 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff"} Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.172618 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba"} Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.172617 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.172557 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.174620 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.174712 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.174728 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.174805 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.174926 
4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.174951 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.698467 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 11 08:23:01 crc kubenswrapper[4992]: I1211 08:23:01.757483 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.176030 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.176103 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.177476 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.177554 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.177577 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.178013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.178104 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.178174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 
08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.829507 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.829782 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.834422 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.834505 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.834536 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.838275 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.838444 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.839929 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.840037 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:02 crc kubenswrapper[4992]: I1211 08:23:02.840070 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.180163 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.181803 4992 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.181897 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.181969 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.578494 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.579150 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.580623 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.580758 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:03 crc kubenswrapper[4992]: I1211 08:23:03.580819 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.815536 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.815781 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.817167 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.817204 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 
08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.817214 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.838681 4992 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:23:05 crc kubenswrapper[4992]: I1211 08:23:05.838740 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:23:06 crc kubenswrapper[4992]: E1211 08:23:06.142233 4992 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 08:23:06 crc kubenswrapper[4992]: I1211 08:23:06.921680 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:06 crc kubenswrapper[4992]: I1211 08:23:06.921995 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:06 crc kubenswrapper[4992]: I1211 08:23:06.923940 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:06 crc kubenswrapper[4992]: I1211 08:23:06.923980 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:06 crc kubenswrapper[4992]: I1211 
08:23:06.924000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:07 crc kubenswrapper[4992]: I1211 08:23:07.352895 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:07 crc kubenswrapper[4992]: I1211 08:23:07.353143 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:07 crc kubenswrapper[4992]: I1211 08:23:07.354522 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:07 crc kubenswrapper[4992]: I1211 08:23:07.354581 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:07 crc kubenswrapper[4992]: I1211 08:23:07.354596 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:07 crc kubenswrapper[4992]: I1211 08:23:07.358192 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.019470 4992 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.195806 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.196658 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.196703 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.196717 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.202608 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:08 crc kubenswrapper[4992]: W1211 08:23:08.821915 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 08:23:08 crc kubenswrapper[4992]: I1211 08:23:08.822037 4992 trace.go:236] Trace[1011008382]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 08:22:58.820) (total time: 10001ms): Dec 11 08:23:08 crc kubenswrapper[4992]: Trace[1011008382]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:23:08.821) Dec 11 08:23:08 crc kubenswrapper[4992]: Trace[1011008382]: [10.00194901s] [10.00194901s] END Dec 11 08:23:08 crc kubenswrapper[4992]: E1211 08:23:08.822090 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 08:23:08 crc kubenswrapper[4992]: W1211 08:23:08.829245 4992 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 08:23:08 crc kubenswrapper[4992]: 
I1211 08:23:08.829298 4992 trace.go:236] Trace[1346160591]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 08:22:58.826) (total time: 10002ms): Dec 11 08:23:08 crc kubenswrapper[4992]: Trace[1346160591]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (08:23:08.829) Dec 11 08:23:08 crc kubenswrapper[4992]: Trace[1346160591]: [10.002708614s] [10.002708614s] END Dec 11 08:23:08 crc kubenswrapper[4992]: E1211 08:23:08.829314 4992 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 08:23:09 crc kubenswrapper[4992]: E1211 08:23:09.029451 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.202105 4992 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.202209 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.202847 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.204809 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.204861 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.204877 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.210479 4992 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 08:23:09 crc kubenswrapper[4992]: I1211 08:23:09.210559 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 08:23:11 crc kubenswrapper[4992]: I1211 08:23:11.763047 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:11 crc kubenswrapper[4992]: I1211 08:23:11.763266 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:11 crc kubenswrapper[4992]: I1211 08:23:11.764461 4992 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:11 crc kubenswrapper[4992]: I1211 08:23:11.764541 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:11 crc kubenswrapper[4992]: I1211 08:23:11.764554 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:11 crc kubenswrapper[4992]: I1211 08:23:11.767952 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:12 crc kubenswrapper[4992]: I1211 08:23:12.213340 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:23:12 crc kubenswrapper[4992]: I1211 08:23:12.213399 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:12 crc kubenswrapper[4992]: I1211 08:23:12.214606 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:12 crc kubenswrapper[4992]: I1211 08:23:12.214675 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:12 crc kubenswrapper[4992]: I1211 08:23:12.214693 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:13 crc kubenswrapper[4992]: I1211 08:23:13.032133 4992 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.191324 4992 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 11 08:23:14 crc kubenswrapper[4992]: E1211 08:23:14.192793 4992 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 11 08:23:14 crc 
kubenswrapper[4992]: I1211 08:23:14.192975 4992 trace.go:236] Trace[935998785]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 08:23:00.052) (total time: 14140ms): Dec 11 08:23:14 crc kubenswrapper[4992]: Trace[935998785]: ---"Objects listed" error: 14139ms (08:23:14.192) Dec 11 08:23:14 crc kubenswrapper[4992]: Trace[935998785]: [14.140000874s] [14.140000874s] END Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.192998 4992 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.193701 4992 trace.go:236] Trace[312446174]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 08:22:59.643) (total time: 14550ms): Dec 11 08:23:14 crc kubenswrapper[4992]: Trace[312446174]: ---"Objects listed" error: 14550ms (08:23:14.193) Dec 11 08:23:14 crc kubenswrapper[4992]: Trace[312446174]: [14.550308867s] [14.550308867s] END Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.193739 4992 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.235667 4992 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59196->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.235771 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59196->192.168.126.11:17697: read: connection reset by peer" Dec 11 08:23:14 crc 
kubenswrapper[4992]: I1211 08:23:14.236145 4992 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.236177 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.623600 4992 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.978034 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:14 crc kubenswrapper[4992]: I1211 08:23:14.982109 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.039037 4992 apiserver.go:52] "Watching apiserver" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.053403 4992 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.054042 4992 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-image-registry/node-ca-bjdzd","openshift-machine-config-operator/machine-config-daemon-m8b9c","openshift-multus/multus-additional-cni-plugins-2x9m4","openshift-multus/multus-lglcz","openshift-ovn-kubernetes/ovnkube-node-fbd2b","openshift-dns/node-resolver-6cjmj","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.054698 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.054729 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.054858 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.054851 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.054955 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.054729 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.055156 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.055696 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.055754 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.055814 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.055921 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.056158 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.056679 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.056781 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.056833 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.061678 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.062486 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.062597 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.062965 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.063042 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.063289 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.063815 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.065774 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.065793 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.066700 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.066977 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.067195 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.070912 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.071158 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.073168 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.073563 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.073776 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.074153 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.074738 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.074982 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075155 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075231 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075276 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075400 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075481 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075563 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075584 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075703 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075783 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.075819 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075900 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.075950 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.077608 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.077722 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.077981 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.115618 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.131458 4992 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.160924 4992 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197588 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197656 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197677 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197695 4992 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197716 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197732 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197747 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197763 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197779 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197794 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197808 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197823 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197840 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197856 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197870 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197887 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.197902 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198050 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198068 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198083 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198100 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198137 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198153 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198168 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198184 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198198 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198216 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198232 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198248 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198265 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198279 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.198295 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198311 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198326 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198346 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198361 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198378 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198399 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198414 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198430 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198457 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198486 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.198511 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198536 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198557 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198571 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198588 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198602 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198617 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198650 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198666 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198690 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198705 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.198722 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198744 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198758 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198774 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198791 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198787 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" 
(OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198806 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198862 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198889 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198914 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198933 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198950 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198970 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198990 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199006 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199021 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199033 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199038 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199084 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199112 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199170 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199214 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199233 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199249 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199266 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199281 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199297 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.199314 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199330 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199346 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199364 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199379 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199398 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") 
pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199413 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199429 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199446 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199462 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199477 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199492 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199509 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199525 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199542 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199558 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199573 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199590 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199606 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199621 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199653 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199669 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199686 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199701 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199717 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199734 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199751 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199768 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.199784 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199786 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199801 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199787 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199819 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199913 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199939 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199962 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.199993 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200023 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200045 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200067 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200087 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200103 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200109 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200147 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200172 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200193 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200212 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200233 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200250 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200328 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200346 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200364 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 
08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200380 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200398 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200418 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200435 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200452 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200472 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200490 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200510 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200528 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200546 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200564 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.200581 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200598 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200620 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200652 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200676 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200723 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200742 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200758 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200775 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200791 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200809 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 
08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200825 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200844 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200863 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200880 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200890 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200896 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200942 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200966 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.200989 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201011 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201070 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201125 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201115 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201305 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201315 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201405 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201434 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201453 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201472 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201490 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201506 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201524 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201544 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201565 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201581 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201597 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.201619 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201660 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201678 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201698 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201721 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201747 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201772 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201795 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201814 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201831 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201849 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201866 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201892 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201920 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201939 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201956 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201976 4992 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201997 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202018 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202036 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202055 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202075 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202094 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202112 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202131 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202150 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202216 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-kubelet\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202239 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-etc-kubernetes\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202263 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-systemd-units\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202284 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-ovn\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202309 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/38d638dc-8df2-4332-9ffe-cb15ddbe91f3-hosts-file\") pod \"node-resolver-6cjmj\" (UID: \"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\") " pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202329 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-multus-certs\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202353 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j64k\" (UniqueName: \"kubernetes.io/projected/5838adfc-502f-44ac-be33-14f964497c4f-kube-api-access-9j64k\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202374 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-system-cni-dir\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202398 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202416 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5v4\" (UniqueName: \"kubernetes.io/projected/38d638dc-8df2-4332-9ffe-cb15ddbe91f3-kube-api-access-zj5v4\") pod \"node-resolver-6cjmj\" (UID: \"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\") " pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202435 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-os-release\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202452 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-var-lib-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202468 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-rootfs\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202489 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202507 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-etc-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202530 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 
11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202549 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-socket-dir-parent\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202565 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-cni-bin\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202581 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-bin\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202598 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d3be416d-184d-47f9-846a-6304666886fe-cni-binary-copy\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202615 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-conf-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " 
pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202652 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202677 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202697 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-system-cni-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202718 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-cni-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202736 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-cnibin\") pod \"multus-lglcz\" (UID: 
\"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202757 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202777 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkl7x\" (UniqueName: \"kubernetes.io/projected/d3be416d-184d-47f9-846a-6304666886fe-kube-api-access-fkl7x\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202811 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-serviceca\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202833 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5838adfc-502f-44ac-be33-14f964497c4f-multus-daemon-config\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202849 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-node-log\") pod \"ovnkube-node-fbd2b\" 
(UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202864 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-cnibin\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202885 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202903 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-slash\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202924 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202943 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-script-lib\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202962 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trsf\" (UniqueName: \"kubernetes.io/projected/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-kube-api-access-8trsf\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202984 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203003 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-hostroot\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203019 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-kubelet\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203037 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203056 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203080 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203104 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4ccj\" (UniqueName: \"kubernetes.io/projected/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-kube-api-access-w4ccj\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203129 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: 
I1211 08:23:15.203151 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-k8s-cni-cncf-io\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203181 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k9hp\" (UniqueName: \"kubernetes.io/projected/216d94db-3002-48a3-b3c2-2a3201f4d6cd-kube-api-access-6k9hp\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203234 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203257 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d3be416d-184d-47f9-846a-6304666886fe-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203276 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5838adfc-502f-44ac-be33-14f964497c4f-cni-binary-copy\") pod \"multus-lglcz\" (UID: 
\"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203589 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-systemd\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203608 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-config\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203623 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-env-overrides\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203659 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-os-release\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203677 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-cni-multus\") pod 
\"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203693 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-netns\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203709 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-log-socket\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203724 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203746 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203763 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-netd\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203780 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-proxy-tls\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203797 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-host\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203813 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovn-node-metrics-cert\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203830 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203855 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203885 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-netns\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203908 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204010 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204029 4992 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204044 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204058 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204072 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204085 4992 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204100 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204114 4992 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204128 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204141 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201333 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201444 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201518 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201625 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201696 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201708 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201897 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201917 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.201989 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202324 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202425 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202416 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202673 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.209742 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202727 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.202759 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203002 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203130 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203559 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203647 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.203722 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204192 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.204375 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.205103 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.205263 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.205594 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.205591 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.205886 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.205893 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.208780 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.209256 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.209354 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.209538 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.209781 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210093 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210289 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210439 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210718 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210750 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210795 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.210982 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211165 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211184 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211219 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211573 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211757 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211783 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.211990 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.212039 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.212085 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.212131 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:23:15.712101836 +0000 UTC m=+19.971575762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.212275 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.212372 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.212452 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.212526 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.215237 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.215366 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.215525 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.215842 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.216008 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.216030 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.216305 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.216542 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.216606 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.216508 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.217023 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.217341 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.218535 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.218741 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.218826 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.218840 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.218881 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.218968 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219369 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219374 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219422 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219474 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219701 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219716 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.219798 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220030 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220099 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220178 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220047 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220337 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220607 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220816 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.220811 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.221711 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.221741 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.222904 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.222988 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223076 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223173 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223373 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223435 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223526 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223653 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223675 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223915 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223944 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.223973 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.224120 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.224260 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.224348 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.224555 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.224591 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.225078 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.225310 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.225463 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.225742 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.225782 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.225935 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.226274 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.226444 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.226889 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.226930 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227017 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227264 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227557 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227792 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227890 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227913 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.227965 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.228110 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.229826 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.229067 4992 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.230404 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.230572 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.230689 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:15.730662918 +0000 UTC m=+19.990136914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.230741 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.230780 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.230867 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.231000 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.231499 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.231722 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.232019 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.232045 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.232397 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.232788 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.232812 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.232941 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.233087 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.233100 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.233734 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.233801 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.233841 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.233935 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:15.733913577 +0000 UTC m=+19.993387493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.234228 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.235106 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.235799 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.236324 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.236449 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.236517 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.236981 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.237081 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.239310 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.231920 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.240021 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.240530 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.241031 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.241309 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.241456 4992 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2" exitCode=255 Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.242279 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2"} Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.242910 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.243201 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.243511 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.245331 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.245489 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.198405 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.245490 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.245549 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.245569 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc 
kubenswrapper[4992]: E1211 08:23:15.245743 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:15.745687625 +0000 UTC m=+20.005161741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.245834 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.246742 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.249743 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.250062 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.249919 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.254130 4992 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.254569 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.258316 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.258515 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.259996 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.260625 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.260975 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.261027 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.261049 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.261065 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.261168 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:15.761136164 +0000 UTC m=+20.020610090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.261212 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.262623 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.264250 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.266473 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.267326 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.267317 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.267607 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.268149 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.268468 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.268773 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.268822 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.270101 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.271868 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.271897 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.272709 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.272993 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.273212 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.284420 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.295797 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.297733 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.302836 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.303975 4992 scope.go:117] "RemoveContainer" containerID="382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.304766 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj5v4\" (UniqueName: \"kubernetes.io/projected/38d638dc-8df2-4332-9ffe-cb15ddbe91f3-kube-api-access-zj5v4\") pod \"node-resolver-6cjmj\" (UID: \"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\") " pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.304895 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-os-release\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305020 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-var-lib-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305129 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-rootfs\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305245 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-etc-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305400 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-socket-dir-parent\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305516 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-cni-bin\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305597 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-etc-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305608 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-bin\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305711 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d3be416d-184d-47f9-846a-6304666886fe-cni-binary-copy\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305756 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-system-cni-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305777 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-cni-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305772 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-var-lib-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305796 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-cnibin\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305817 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-conf-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305839 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkl7x\" (UniqueName: \"kubernetes.io/projected/d3be416d-184d-47f9-846a-6304666886fe-kube-api-access-fkl7x\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305860 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-serviceca\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305880 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5838adfc-502f-44ac-be33-14f964497c4f-multus-daemon-config\") pod 
\"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305896 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-node-log\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305928 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-slash\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305972 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-system-cni-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305981 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-cni-bin\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.306339 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-socket-dir-parent\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.306468 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-bin\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.305255 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-os-release\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.306590 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-cnibin\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.306741 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-conf-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.306780 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-multus-cni-dir\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.306959 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-slash\") pod \"ovnkube-node-fbd2b\" 
(UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.306825 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-node-log\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307015 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307041 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-script-lib\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307060 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trsf\" (UniqueName: \"kubernetes.io/projected/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-kube-api-access-8trsf\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307078 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-cnibin\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") 
" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307099 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-hostroot\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307118 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-kubelet\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307137 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307176 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-k8s-cni-cncf-io\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307196 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k9hp\" (UniqueName: \"kubernetes.io/projected/216d94db-3002-48a3-b3c2-2a3201f4d6cd-kube-api-access-6k9hp\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.307935 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-openvswitch\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.308229 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-hostroot\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.308370 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.308395 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-kubelet\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.308419 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-cnibin\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: 
I1211 08:23:15.308442 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-k8s-cni-cncf-io\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.316506 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d3be416d-184d-47f9-846a-6304666886fe-cni-binary-copy\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.316587 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-rootfs\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317456 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-serviceca\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317527 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317556 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4ccj\" (UniqueName: \"kubernetes.io/projected/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-kube-api-access-w4ccj\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317584 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5838adfc-502f-44ac-be33-14f964497c4f-cni-binary-copy\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317611 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-systemd\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317651 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-config\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317670 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-env-overrides\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317693 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-os-release\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317720 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d3be416d-184d-47f9-846a-6304666886fe-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317744 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-cni-multus\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317766 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-netns\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317785 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-log-socket\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317809 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317850 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-netd\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317876 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-proxy-tls\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317898 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-host\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317925 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317948 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317975 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-netns\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.317998 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovn-node-metrics-cert\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318024 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318046 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-kubelet\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318066 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-etc-kubernetes\") pod \"multus-lglcz\" (UID: 
\"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318087 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-systemd-units\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318110 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-ovn\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318132 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/38d638dc-8df2-4332-9ffe-cb15ddbe91f3-hosts-file\") pod \"node-resolver-6cjmj\" (UID: \"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\") " pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318153 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-multus-certs\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318178 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j64k\" (UniqueName: \"kubernetes.io/projected/5838adfc-502f-44ac-be33-14f964497c4f-kube-api-access-9j64k\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: 
I1211 08:23:15.318201 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-system-cni-dir\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318394 4992 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318423 4992 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318437 4992 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318450 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318461 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318479 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318490 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318502 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318512 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318528 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318539 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318550 4992 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318563 4992 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318582 4992 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318593 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318604 4992 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318618 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318644 4992 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318655 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318665 4992 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 
11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318678 4992 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318689 4992 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318743 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318755 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318770 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318781 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318791 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318808 4992 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318818 4992 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318828 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318837 4992 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318849 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318859 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318869 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318881 4992 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318896 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318905 4992 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318914 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318926 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318941 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318956 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318968 4992 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318983 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.318994 4992 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319005 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319014 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319028 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319036 4992 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319046 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") 
on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319056 4992 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319070 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319079 4992 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319089 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319099 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319113 4992 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319123 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.319134 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319147 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319158 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319168 4992 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319177 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319191 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319201 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319210 4992 
reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319219 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319232 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319242 4992 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319251 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319264 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319298 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319307 4992 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319316 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319331 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319341 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319350 4992 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319362 4992 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319384 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319392 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 
08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319401 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319410 4992 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319423 4992 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319433 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319441 4992 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319454 4992 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319462 4992 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319471 4992 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319480 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319491 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319680 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319695 4992 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319707 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.319883 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-system-cni-dir\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320407 
4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-script-lib\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320435 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5838adfc-502f-44ac-be33-14f964497c4f-multus-daemon-config\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320653 4992 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320746 4992 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320780 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320801 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320817 4992 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320834 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320852 4992 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320852 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320864 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320875 4992 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320891 4992 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320906 4992 reconciler_common.go:293] "Volume detached 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320919 4992 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320929 4992 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320940 4992 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320954 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320964 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320976 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320989 4992 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320999 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321008 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321018 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321033 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.320991 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321049 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321061 4992 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321072 4992 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321087 4992 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321101 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321113 4992 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321123 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321139 4992 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321148 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321157 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321171 4992 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321181 4992 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321190 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321200 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321212 4992 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321223 4992 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321233 4992 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321244 4992 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 
08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321257 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321268 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321280 4992 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321295 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321306 4992 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321316 4992 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321327 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321341 
4992 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321353 4992 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321363 4992 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321359 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321372 4992 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321421 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321433 4992 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321444 4992 reconciler_common.go:293] 
"Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321436 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-cni-multus\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321466 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321549 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321565 4992 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321575 4992 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321597 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321644 4992 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321656 4992 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321668 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321681 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321717 4992 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321729 4992 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321739 4992 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321753 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321766 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321796 4992 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321810 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321825 4992 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321836 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321846 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321857 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc 
kubenswrapper[4992]: I1211 08:23:15.321895 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321909 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321919 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321930 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321966 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321977 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321989 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.322003 4992 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.322014 4992 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.322127 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d3be416d-184d-47f9-846a-6304666886fe-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.322691 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5838adfc-502f-44ac-be33-14f964497c4f-cni-binary-copy\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.322747 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-systemd\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323214 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " 
pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323264 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-config\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323354 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323396 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-var-lib-kubelet\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323428 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-etc-kubernetes\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323460 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-systemd-units\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 
08:23:15.323499 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-ovn\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323541 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/38d638dc-8df2-4332-9ffe-cb15ddbe91f3-hosts-file\") pod \"node-resolver-6cjmj\" (UID: \"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\") " pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323566 4992 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323574 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-multus-certs\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323592 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323612 4992 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323650 4992 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323674 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323685 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323701 4992 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323711 4992 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.321385 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-host\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323797 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-netns\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323824 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-ovn-kubernetes\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323856 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-netd\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323867 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-log-socket\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323930 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d3be416d-184d-47f9-846a-6304666886fe-os-release\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.324569 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-env-overrides\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.323286 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/5838adfc-502f-44ac-be33-14f964497c4f-host-run-netns\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.326595 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.328774 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.329747 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovn-node-metrics-cert\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.333696 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkl7x\" (UniqueName: \"kubernetes.io/projected/d3be416d-184d-47f9-846a-6304666886fe-kube-api-access-fkl7x\") pod \"multus-additional-cni-plugins-2x9m4\" (UID: \"d3be416d-184d-47f9-846a-6304666886fe\") " pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.334917 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-proxy-tls\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.339591 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k9hp\" (UniqueName: \"kubernetes.io/projected/216d94db-3002-48a3-b3c2-2a3201f4d6cd-kube-api-access-6k9hp\") pod \"ovnkube-node-fbd2b\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.340099 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trsf\" (UniqueName: \"kubernetes.io/projected/fa42ae65-5fda-421e-b27a-6d8a0b2defb3-kube-api-access-8trsf\") pod \"machine-config-daemon-m8b9c\" (UID: \"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.340233 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj5v4\" (UniqueName: \"kubernetes.io/projected/38d638dc-8df2-4332-9ffe-cb15ddbe91f3-kube-api-access-zj5v4\") pod \"node-resolver-6cjmj\" (UID: \"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\") " pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.341210 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.344143 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4ccj\" (UniqueName: \"kubernetes.io/projected/32e8bfcd-0901-4994-a7c3-3c33f8a4b67c-kube-api-access-w4ccj\") pod \"node-ca-bjdzd\" (UID: \"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\") " pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.344232 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j64k\" (UniqueName: \"kubernetes.io/projected/5838adfc-502f-44ac-be33-14f964497c4f-kube-api-access-9j64k\") pod \"multus-lglcz\" (UID: \"5838adfc-502f-44ac-be33-14f964497c4f\") " pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.359785 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.374492 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.380067 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.381934 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.387957 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.395125 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.396437 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.404913 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lglcz" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.410325 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.416950 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:15 crc kubenswrapper[4992]: W1211 08:23:15.420382 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ff81b9a9fc00638166ebf0fcd3f3bba3f2dc3d91e6fefeebaa0bc4b38165aaef WatchSource:0}: Error finding container ff81b9a9fc00638166ebf0fcd3f3bba3f2dc3d91e6fefeebaa0bc4b38165aaef: Status 404 returned error can't find the container with id ff81b9a9fc00638166ebf0fcd3f3bba3f2dc3d91e6fefeebaa0bc4b38165aaef Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.425131 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.425160 4992 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.426261 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.432126 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6cjmj" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.441045 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bjdzd" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.447427 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.451155 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.472013 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.484124 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.522005 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: W1211 08:23:15.530781 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216d94db_3002_48a3_b3c2_2a3201f4d6cd.slice/crio-5c0e5e5760bb8408585572087aa2a9fc777c22071fca5224966c934afe3720fc WatchSource:0}: Error finding container 5c0e5e5760bb8408585572087aa2a9fc777c22071fca5224966c934afe3720fc: Status 404 returned error can't find the container with id 5c0e5e5760bb8408585572087aa2a9fc777c22071fca5224966c934afe3720fc Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.541143 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: W1211 08:23:15.544757 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa42ae65_5fda_421e_b27a_6d8a0b2defb3.slice/crio-edcbe3781804769f3993281b33a1a206d968fd6a2baaf4c767f05faac66757f1 WatchSource:0}: Error finding container edcbe3781804769f3993281b33a1a206d968fd6a2baaf4c767f05faac66757f1: Status 404 returned error can't find the container with id edcbe3781804769f3993281b33a1a206d968fd6a2baaf4c767f05faac66757f1 Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.558361 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: W1211 08:23:15.562428 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32e8bfcd_0901_4994_a7c3_3c33f8a4b67c.slice/crio-8542d8545cb3e2ecd97038e8856e95b2940ee597b46964c4df6c1d263201d866 WatchSource:0}: Error finding container 8542d8545cb3e2ecd97038e8856e95b2940ee597b46964c4df6c1d263201d866: Status 404 returned error can't find the container with id 8542d8545cb3e2ecd97038e8856e95b2940ee597b46964c4df6c1d263201d866 Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.587244 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.607931 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.638056 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.664233 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.684983 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.701894 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.719051 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.732702 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.732825 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 
08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.732998 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.733066 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:16.733047381 +0000 UTC m=+20.992521307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.733251 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:23:16.733202956 +0000 UTC m=+20.992676982 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.733382 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.750851 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.771977 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.801755 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.834480 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.835400 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.835464 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.835598 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.835731 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:16.835704801 +0000 UTC m=+21.095178727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.835867 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.835895 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.835979 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc 
kubenswrapper[4992]: E1211 08:23:15.836051 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:16.83601478 +0000 UTC m=+21.095488706 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.836148 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.836167 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.836993 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc kubenswrapper[4992]: E1211 08:23:15.837351 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:23:16.837333556 +0000 UTC m=+21.096807482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.851239 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.868397 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.874944 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.875602 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.889612 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.914543 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.926821 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.942903 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:15 crc kubenswrapper[4992]: I1211 08:23:15.996989 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.024072 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.067758 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.093206 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.094108 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.094195 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.094249 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.094446 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.098841 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.099357 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.100742 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.101381 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.102401 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.103062 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.103802 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.104766 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.105594 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.107257 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.108742 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.108882 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.110178 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.111402 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" 
Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.112308 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.113029 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.114118 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.114992 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.115986 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.116595 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.117244 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.118251 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" 
Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.118906 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.119382 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.120681 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.121152 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.122246 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.123071 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.124267 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.124993 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" 
Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.126095 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.126766 4992 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.126879 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.129297 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.130102 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.130601 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.132435 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.133885 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.134651 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.134808 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.135942 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.136788 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.137744 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.139055 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.141679 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.142874 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.144067 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.144926 4992 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.146262 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.148597 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.149613 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.150283 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.151221 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.152509 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.154365 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.155379 4992 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.160587 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.173205 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.189347 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.208959 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.222868 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.237739 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.257902 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.261707 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bjdzd" event={"ID":"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c","Type":"ContainerStarted","Data":"16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.261756 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bjdzd" event={"ID":"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c","Type":"ContainerStarted","Data":"8542d8545cb3e2ecd97038e8856e95b2940ee597b46964c4df6c1d263201d866"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.264256 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerStarted","Data":"59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.264368 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" 
event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerStarted","Data":"93c9f972098619f8e3ab3c246e15821c50383bca5a96ac89cca2182232b93ee6"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.266843 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.268673 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.269162 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.269799 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6cjmj" event={"ID":"38d638dc-8df2-4332-9ffe-cb15ddbe91f3","Type":"ContainerStarted","Data":"f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.269863 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6cjmj" event={"ID":"38d638dc-8df2-4332-9ffe-cb15ddbe91f3","Type":"ContainerStarted","Data":"b9a56227f90eef254ecef8307054889e577829b98261417a580eb07436ef9a40"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.271969 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerStarted","Data":"abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.272061 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerStarted","Data":"0d422e1982c074d54a0085360c3f17eda675c4ee25764c1c85fba8d854603e62"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.273869 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" exitCode=0 Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.273950 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.274013 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"5c0e5e5760bb8408585572087aa2a9fc777c22071fca5224966c934afe3720fc"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.276219 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.276705 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.276771 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.276787 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a751d8cfe4d21438c4f549da2d0d512d35b760bd588da720f0cd50a60fdeb4f0"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.288731 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.292089 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64"} Dec 11 08:23:16 crc kubenswrapper[4992]: 
I1211 08:23:16.292172 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.292203 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"edcbe3781804769f3993281b33a1a206d968fd6a2baaf4c767f05faac66757f1"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.294261 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ff81b9a9fc00638166ebf0fcd3f3bba3f2dc3d91e6fefeebaa0bc4b38165aaef"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.295845 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.295910 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"09407cee5fe7e627933151c63c662f5f00907e1c418cd2e422623555b6b3e558"} Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.306464 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.329700 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.346556 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.357471 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.373778 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.400754 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.417764 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.436766 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.460988 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.477747 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.493866 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.509100 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.538186 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.579565 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.612567 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.653291 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.693698 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.737974 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.752552 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.752787 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.752943 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.753056 4992 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:23:18.752949947 +0000 UTC m=+23.012423873 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.753179 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:18.753163034 +0000 UTC m=+23.012637110 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.778075 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.814329 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.851164 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.854167 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.854284 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.854357 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.854519 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.854592 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.854665 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.854750 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:18.854737843 +0000 UTC m=+23.114211769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.854849 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.854918 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:23:18.854910288 +0000 UTC m=+23.114384214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.855022 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.855082 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.855132 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:16 crc kubenswrapper[4992]: E1211 08:23:16.855199 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:18.855191505 +0000 UTC m=+23.114665431 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.901731 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d
67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.937669 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:16 crc kubenswrapper[4992]: I1211 08:23:16.968828 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.094304 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.094435 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.305454 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.305582 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.308247 4992 generic.go:334] "Generic (PLEG): container finished" podID="d3be416d-184d-47f9-846a-6304666886fe" containerID="abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130" exitCode=0 Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.308343 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerDied","Data":"abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.347836 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.371697 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.393697 4992 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.403723 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.403781 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.403793 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.403831 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.403945 4992 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.426127 4992 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.426507 4992 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.427746 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.427781 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.427806 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.427825 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.427835 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.442158 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.463690 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.480166 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.480223 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.480235 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.480258 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.480269 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.482448 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.506160 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.508043 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.517984 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.518041 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.518055 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc 
kubenswrapper[4992]: I1211 08:23:17.518079 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.518094 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.532772 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.535931 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.540588 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.540621 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.540647 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.540664 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.540674 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.548768 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.555170 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.565969 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.566007 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.566018 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.566037 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.566050 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.569436 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.579959 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: E1211 08:23:17.580102 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.593904 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.593949 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.593961 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.593978 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.593992 4992 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.602262 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.620378 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.648686 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.687081 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08
:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.700766 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.700821 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.700837 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.700856 4992 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.700884 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.706413 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\
\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.724747 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:17Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.803662 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.803707 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.803716 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc 
kubenswrapper[4992]: I1211 08:23:17.803734 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.803746 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.912915 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.912973 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.912985 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.913007 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:17 crc kubenswrapper[4992]: I1211 08:23:17.913020 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:17Z","lastTransitionTime":"2025-12-11T08:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.015883 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.016394 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.016404 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.016424 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.016437 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.095030 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.095070 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.095593 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.095623 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.119341 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.119407 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.119418 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.119445 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.119457 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.223021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.223073 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.223084 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.223110 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.223124 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.315394 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerStarted","Data":"d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.319328 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.319402 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.319415 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.319427 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.325363 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.325406 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 
08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.325420 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.325443 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.325479 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.332182 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.343827 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.357011 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.379040 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.397947 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.411930 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.428805 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.429109 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.429201 
4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.429283 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.429348 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.430873 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa
9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.449375 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.468440 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 
08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.483592 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.498354 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.519465 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.531330 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.531382 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.531392 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.531415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.531429 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.533759 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.548181 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.557990 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.634490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.634559 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.634571 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.634592 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.634603 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.737103 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.737159 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.737177 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.737195 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.737206 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.779784 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.779938 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.780053 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:23:22.780011273 +0000 UTC m=+27.039485189 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.780071 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.780176 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:22.780158906 +0000 UTC m=+27.039633052 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.845962 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.846490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.846561 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.846653 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.846734 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.881028 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.881116 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.881157 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881259 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881300 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881327 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881339 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881332 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881376 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881390 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881405 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:22.881376748 +0000 UTC m=+27.140850674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881431 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:22.881421539 +0000 UTC m=+27.140895465 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:18 crc kubenswrapper[4992]: E1211 08:23:18.881450 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:22.88144224 +0000 UTC m=+27.140916166 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.950040 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.950092 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.950105 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.950125 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:18 crc kubenswrapper[4992]: I1211 08:23:18.950137 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:18Z","lastTransitionTime":"2025-12-11T08:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.053690 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.053749 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.053758 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.053773 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.053788 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.094081 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:19 crc kubenswrapper[4992]: E1211 08:23:19.094250 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.156692 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.156756 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.156776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.156801 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.156818 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.260618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.260693 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.260705 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.260726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.260741 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.328879 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.331187 4992 generic.go:334] "Generic (PLEG): container finished" podID="d3be416d-184d-47f9-846a-6304666886fe" containerID="d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f" exitCode=0 Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.331243 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerDied","Data":"d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.345562 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.362306 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.363911 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc 
kubenswrapper[4992]: I1211 08:23:19.363965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.363980 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.364003 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.364016 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.375843 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.389332 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.402925 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.415849 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.427655 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.449225 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.466847 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.466895 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.466907 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.466930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.466947 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.466960 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.481603 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.497921 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.513892 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.528292 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.542774 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.566377 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.569573 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.569623 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.569669 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.569691 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.569704 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.585695 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.604569 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.616125 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.632121 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.666872 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.675845 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.675903 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.675912 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.675930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.675939 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.692058 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.728374 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.761312 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.779693 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.779758 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.779769 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.779790 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.779800 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.781458 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.794403 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.812908 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.829335 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.844148 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.864270 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.880009 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.883364 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.883412 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.883425 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.883448 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.883461 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.986677 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.986741 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.986757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.986778 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:19 crc kubenswrapper[4992]: I1211 08:23:19.986789 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:19Z","lastTransitionTime":"2025-12-11T08:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.089038 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.089085 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.089096 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.089116 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.089130 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.094441 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.094538 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:20 crc kubenswrapper[4992]: E1211 08:23:20.094615 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:20 crc kubenswrapper[4992]: E1211 08:23:20.095202 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.192070 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.192127 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.192137 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.192158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.192171 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.294942 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.295013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.295038 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.295069 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.295093 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.337856 4992 generic.go:334] "Generic (PLEG): container finished" podID="d3be416d-184d-47f9-846a-6304666886fe" containerID="bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e" exitCode=0 Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.337933 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerDied","Data":"bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.344555 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.358445 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.379229 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.393517 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.397875 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.397922 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.397938 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.397998 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.398017 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.411331 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.425189 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.439449 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.454542 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.466839 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.480171 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.499983 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.501463 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.501514 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.501529 4992 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.501556 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.501572 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.512871 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d
4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.526358 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.587958 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.604038 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.604069 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.604077 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.604093 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.604102 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.607539 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 
08:23:20.621754 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.707220 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.707271 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc 
kubenswrapper[4992]: I1211 08:23:20.707280 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.707297 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.707307 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.810035 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.810086 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.810096 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.810118 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.810129 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.913423 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.913465 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.913476 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.913491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:20 crc kubenswrapper[4992]: I1211 08:23:20.913501 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:20Z","lastTransitionTime":"2025-12-11T08:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.017719 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.017769 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.017784 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.017803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.017814 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.094376 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:21 crc kubenswrapper[4992]: E1211 08:23:21.094589 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.120850 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.120902 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.120916 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.120941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.120956 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.223704 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.223749 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.223760 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.223782 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.223794 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.326252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.326321 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.326344 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.326372 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.326393 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.351433 4992 generic.go:334] "Generic (PLEG): container finished" podID="d3be416d-184d-47f9-846a-6304666886fe" containerID="8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257" exitCode=0 Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.351484 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerDied","Data":"8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.368100 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.393848 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.414780 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.425994 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.430333 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.430395 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.430409 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.430431 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.430443 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.441824 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.456238 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.479290 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},
{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"star
tedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.494480 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.508364 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.523779 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.533546 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.533599 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.533613 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.533651 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.533664 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.550375 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.567027 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.580597 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.592063 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.606913 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.636424 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.636463 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.636491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.636509 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.636518 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.739539 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.739610 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.739650 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.739684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.739699 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.842278 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.842324 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.842333 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.842348 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.842358 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.945967 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.946041 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.946067 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.946098 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:21 crc kubenswrapper[4992]: I1211 08:23:21.946118 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:21Z","lastTransitionTime":"2025-12-11T08:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.053688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.053764 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.053777 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.053796 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.053814 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.094299 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.094343 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.094514 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.094733 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.158204 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.158242 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.158254 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.158272 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.158284 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.267249 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.267311 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.267336 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.267365 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.267391 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.370740 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.370801 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.370823 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.370851 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.370873 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.473849 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.473909 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.473927 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.473951 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.473969 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.577046 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.577122 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.577142 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.577169 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.577188 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.679786 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.679834 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.679845 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.679861 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.679875 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.784310 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.784764 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.784782 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.784804 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.784816 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.821591 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.821758 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.821906 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.821995 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:23:30.821900463 +0000 UTC m=+35.081374389 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.822091 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:30.822079538 +0000 UTC m=+35.081553464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.887154 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.887282 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.887371 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.887458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.887537 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:22Z","lastTransitionTime":"2025-12-11T08:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.922685 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.923042 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:22 crc kubenswrapper[4992]: I1211 08:23:22.923227 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.923000 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.923486 4992 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.923575 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.923739 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:30.923724131 +0000 UTC m=+35.183198057 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.923182 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.924499 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.924583 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.924768 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:30.924756758 +0000 UTC m=+35.184230684 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.923340 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:22 crc kubenswrapper[4992]: E1211 08:23:22.924948 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:30.924937974 +0000 UTC m=+35.184411910 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.036067 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.036180 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.036237 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.036314 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.036385 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.094557 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:23 crc kubenswrapper[4992]: E1211 08:23:23.094749 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.138466 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.138508 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.138517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.138534 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.138547 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.244462 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.244515 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.244529 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.244548 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.244559 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.347952 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.348140 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.348206 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.348276 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.348333 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.361859 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerStarted","Data":"e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.366518 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.366903 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.386720 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.404670 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.405999 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.417697 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.428779 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.450042 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.451898 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.451927 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.451939 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.451957 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.451972 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.464465 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.479753 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.491134 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.507323 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.519042 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.535111 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.546477 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.555589 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.555829 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.555908 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.556030 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.556135 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.566694 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.587845 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e67
5d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.599988 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.611600 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.633681 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.649568 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.658785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc 
kubenswrapper[4992]: I1211 08:23:23.658829 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.658840 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.658859 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.658871 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.667516 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.684040 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.699240 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.710315 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.722132 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.742275 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6a
eadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.761393 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.761566 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.761607 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.761618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.761661 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.761675 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.776723 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.795328 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.812276 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.824626 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.844023 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 
08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:23Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.865383 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.865452 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.865468 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.865495 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.865513 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.968244 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.968317 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.968339 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.968369 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:23 crc kubenswrapper[4992]: I1211 08:23:23.968387 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:23Z","lastTransitionTime":"2025-12-11T08:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.070815 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.070879 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.070891 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.070909 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.070921 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.094212 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:24 crc kubenswrapper[4992]: E1211 08:23:24.094353 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.094215 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:24 crc kubenswrapper[4992]: E1211 08:23:24.094691 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.174571 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.174923 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.175146 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.175494 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.175594 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.278381 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.278426 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.278439 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.278455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.278465 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.373344 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.373986 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.381668 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.381724 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.381741 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.381766 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.381785 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.401469 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.421239 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.436906 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.453247 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.475557 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.484318 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.484381 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.484393 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.484415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.484435 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.491447 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.505040 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.516370 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.527890 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.547478 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6a
eadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.568273 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.585509 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.587415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.587459 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.587471 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.587492 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.587503 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.606527 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.638324 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.656259 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.676941 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 
08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:24Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.690174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.690220 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.690232 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.690252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.690265 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.793761 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.794141 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.794209 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.794331 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.794395 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.897038 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.897392 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.897472 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.897570 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:24 crc kubenswrapper[4992]: I1211 08:23:24.897678 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:24Z","lastTransitionTime":"2025-12-11T08:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.001124 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.001181 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.001194 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.001215 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.001227 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.094455 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:25 crc kubenswrapper[4992]: E1211 08:23:25.094672 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.103964 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.104007 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.104022 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.104046 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.104065 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.208557 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.208616 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.208629 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.208675 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.208689 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.311106 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.311143 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.311152 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.311168 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.311178 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.382527 4992 generic.go:334] "Generic (PLEG): container finished" podID="d3be416d-184d-47f9-846a-6304666886fe" containerID="e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262" exitCode=0 Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.382651 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerDied","Data":"e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.383306 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.403912 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce9664
8c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.417300 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.417985 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.418305 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.419744 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.420563 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.432500 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.456000 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.475264 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.526740 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.535077 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.535137 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.535154 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.535179 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.535197 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.572958 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.587589 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.599724 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.612366 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.624703 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.637157 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.637682 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.637717 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.637726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.637741 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.637751 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.666901 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.687560 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.700392 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.712603 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:25Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.747512 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.747556 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.747567 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.747587 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.747597 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.850591 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.850649 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.850665 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.850682 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.850694 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.953011 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.953052 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.953062 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.953080 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:25 crc kubenswrapper[4992]: I1211 08:23:25.953090 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:25Z","lastTransitionTime":"2025-12-11T08:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.056040 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.056086 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.056095 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.056113 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.056123 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.095174 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:26 crc kubenswrapper[4992]: E1211 08:23:26.095296 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.095600 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:26 crc kubenswrapper[4992]: E1211 08:23:26.095704 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.107561 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.123053 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.135875 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.153803 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.164325 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.164369 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.164381 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.164398 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.164409 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.178202 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.192902 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.207681 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.221414 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.235107 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.250920 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.266183 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.267339 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.267373 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.267386 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.267427 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.267441 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.282697 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.304256 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.318292 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e67
5d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.330964 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.369953 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.370011 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.370020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc 
kubenswrapper[4992]: I1211 08:23:26.370034 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.370043 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.387770 4992 generic.go:334] "Generic (PLEG): container finished" podID="d3be416d-184d-47f9-846a-6304666886fe" containerID="d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719" exitCode=0 Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.387892 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.388486 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerDied","Data":"d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.403934 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.418779 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.435022 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.449265 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.466917 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.473024 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.473079 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.473092 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.473113 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.473127 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.483194 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.495289 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.510053 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.525144 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.540394 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.557371 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.576025 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.576365 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.576433 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.576513 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.576572 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.583627 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.601294 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.616854 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.628042 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:26Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.679341 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.679400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.679418 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.679444 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.679461 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.782042 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.782105 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.782126 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.782154 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.782174 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.885684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.885749 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.885768 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.885793 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.885810 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.990461 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.990530 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.990551 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.990579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:26 crc kubenswrapper[4992]: I1211 08:23:26.990599 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:26Z","lastTransitionTime":"2025-12-11T08:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.093488 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.093547 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.093560 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.093580 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.093592 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.094028 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.094257 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.197028 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.197081 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.197093 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.197115 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.197129 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.299777 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.299827 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.299840 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.299860 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.299875 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.395040 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" event={"ID":"d3be416d-184d-47f9-846a-6304666886fe","Type":"ContainerStarted","Data":"974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.397588 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/0.log" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401313 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401361 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401375 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401390 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401400 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401552 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f" exitCode=1 Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.401586 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.402323 4992 scope.go:117] "RemoveContainer" containerID="ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.412253 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.426929 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.444733 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.463082 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.483200 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.498175 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr"] Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.498614 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.500953 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.502117 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.504808 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.505362 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.505650 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.505748 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.505815 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.505870 
4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.515140 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.540735 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.557735 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.569442 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.577810 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41015a59-be8f-40e9-9315-d4d0179897b1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.577850 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41015a59-be8f-40e9-9315-d4d0179897b1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.577908 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41015a59-be8f-40e9-9315-d4d0179897b1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.577931 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svv4r\" (UniqueName: \"kubernetes.io/projected/41015a59-be8f-40e9-9315-d4d0179897b1-kube-api-access-svv4r\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.585173 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.603233 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.608592 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.608688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.608699 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.608718 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.608729 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.620813 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.635569 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.666174 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.679062 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41015a59-be8f-40e9-9315-d4d0179897b1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.679151 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svv4r\" (UniqueName: \"kubernetes.io/projected/41015a59-be8f-40e9-9315-d4d0179897b1-kube-api-access-svv4r\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.679188 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41015a59-be8f-40e9-9315-d4d0179897b1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.679229 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41015a59-be8f-40e9-9315-d4d0179897b1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.680094 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/41015a59-be8f-40e9-9315-d4d0179897b1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.680434 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/41015a59-be8f-40e9-9315-d4d0179897b1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.685681 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/41015a59-be8f-40e9-9315-d4d0179897b1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.691399 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.700402 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svv4r\" (UniqueName: \"kubernetes.io/projected/41015a59-be8f-40e9-9315-d4d0179897b1-kube-api-access-svv4r\") pod \"ovnkube-control-plane-749d76644c-q5bwr\" (UID: \"41015a59-be8f-40e9-9315-d4d0179897b1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.705286 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548
c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.711309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.711374 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.711393 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.711417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.711431 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.720174 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.722197 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.722256 4992 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.722269 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.722290 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.722300 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.732023 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.739026 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.742780 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.742825 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.742840 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.742861 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.742873 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.749092 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.760656 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.761258 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.764601 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.764664 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.764678 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.764697 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.764710 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.772478 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.776841 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.780672 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.780694 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.780701 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.780715 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.780725 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.793244 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.795054 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\" 6213 factory.go:656] Stopping watch factory\\\\nI1211 08:23:26.979540 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:23:26.979543 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 
08:23:26.979558 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 08:23:26.979544 6213 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 08:23:26.979659 6213 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 08:23:26.979731 6213 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979752 6213 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979685 6213 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979704 6213 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979762 6213 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416
a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.798076 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.798099 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.798108 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.798123 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.798133 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.813588 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.814692 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"sys
temUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: E1211 08:23:27.814799 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.816445 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.816467 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.816476 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.816492 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.816501 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.824240 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.828319 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.843230 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: W1211 08:23:27.852406 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41015a59_be8f_40e9_9315_d4d0179897b1.slice/crio-10444f43b5b954fa56a2df16f0280e466e3ece6d61e7bd07cbed40b72920e01f WatchSource:0}: Error finding container 10444f43b5b954fa56a2df16f0280e466e3ece6d61e7bd07cbed40b72920e01f: Status 404 returned error can't find the container with id 10444f43b5b954fa56a2df16f0280e466e3ece6d61e7bd07cbed40b72920e01f Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.854437 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-nod
e-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.885944 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.905378 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.918974 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.919017 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.919031 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.919047 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.919059 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:27Z","lastTransitionTime":"2025-12-11T08:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.923183 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:27 crc kubenswrapper[4992]: I1211 08:23:27.936010 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:27Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.020957 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.021000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.021050 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.021067 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.021119 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.094269 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:28 crc kubenswrapper[4992]: E1211 08:23:28.094402 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.094777 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:28 crc kubenswrapper[4992]: E1211 08:23:28.094833 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.124066 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.124110 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.124118 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.124134 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.124143 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.226525 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.226584 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.226599 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.226623 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.226664 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.329406 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.329457 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.329472 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.329493 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.329508 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.407755 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" event={"ID":"41015a59-be8f-40e9-9315-d4d0179897b1","Type":"ContainerStarted","Data":"10444f43b5b954fa56a2df16f0280e466e3ece6d61e7bd07cbed40b72920e01f"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.410714 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/0.log" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.420656 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.421182 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.432269 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.432346 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.432367 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.432397 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.432416 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.437017 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.453591 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.468557 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 
08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.482030 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.496208 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.521017 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\" 6213 factory.go:656] Stopping watch factory\\\\nI1211 08:23:26.979540 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:23:26.979543 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:26.979558 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 08:23:26.979544 6213 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1211 08:23:26.979659 6213 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 08:23:26.979731 6213 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979752 6213 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979685 6213 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979704 6213 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979762 6213 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\
\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.532930 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.536119 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.536228 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.536242 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.536260 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.536271 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.550737 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.561968 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.574988 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1
543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.588981 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.603139 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.618141 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.639554 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.639598 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.639609 4992 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.639647 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.639672 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.643742 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47
389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.667351 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548
c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.684787 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:28Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:28 crc kubenswrapper[4992]: 
I1211 08:23:28.743126 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.743286 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.743300 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.743317 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.743330 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.847844 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.847900 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.847912 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.847931 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.847942 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.951197 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.951311 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.951330 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.951367 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.951387 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:28Z","lastTransitionTime":"2025-12-11T08:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.989277 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-j68fr"] Dec 11 08:23:28 crc kubenswrapper[4992]: I1211 08:23:28.989886 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:28 crc kubenswrapper[4992]: E1211 08:23:28.989960 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.010428 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.029368 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.045095 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.054264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.054347 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.054364 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.054384 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.054400 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.064930 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc 
kubenswrapper[4992]: I1211 08:23:29.088044 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.094148 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:29 crc kubenswrapper[4992]: E1211 08:23:29.094286 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.095184 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.095259 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbs6x\" (UniqueName: \"kubernetes.io/projected/1b67a6a3-6d97-4b58-96d9-f0909df30802-kube-api-access-nbs6x\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.105655 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.127193 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.142578 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.157594 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.157716 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.157737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.157766 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.157785 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.181199 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.197011 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.197139 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbs6x\" (UniqueName: \"kubernetes.io/projected/1b67a6a3-6d97-4b58-96d9-f0909df30802-kube-api-access-nbs6x\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:29 crc kubenswrapper[4992]: E1211 08:23:29.197250 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:29 crc kubenswrapper[4992]: E1211 08:23:29.197406 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:29.697379379 +0000 UTC m=+33.956853315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.202029 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\
\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"
finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.218619 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.228537 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbs6x\" (UniqueName: \"kubernetes.io/projected/1b67a6a3-6d97-4b58-96d9-f0909df30802-kube-api-access-nbs6x\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.236910 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.256345 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.259826 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.259865 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.259878 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.259895 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.259908 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.276682 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.291690 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.316268 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\" 6213 factory.go:656] Stopping watch factory\\\\nI1211 08:23:26.979540 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:23:26.979543 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:26.979558 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 08:23:26.979544 6213 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1211 08:23:26.979659 6213 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 08:23:26.979731 6213 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979752 6213 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979685 6213 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979704 6213 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979762 6213 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\
\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.329300 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.362482 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.362535 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.362552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.362572 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.362584 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.426697 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" event={"ID":"41015a59-be8f-40e9-9315-d4d0179897b1","Type":"ContainerStarted","Data":"ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.428720 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/1.log" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.429390 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/0.log" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.432032 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c" exitCode=1 Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.432057 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.432084 4992 scope.go:117] "RemoveContainer" containerID="ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.433616 4992 scope.go:117] "RemoveContainer" containerID="f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c" Dec 11 08:23:29 crc kubenswrapper[4992]: E1211 08:23:29.434030 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.456476 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o
://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.465013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.465056 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.465067 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.465086 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.465098 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.472875 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.487739 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.500194 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.512428 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.524899 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 
08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.536854 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.559936 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\" 6213 factory.go:656] Stopping watch factory\\\\nI1211 08:23:26.979540 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:23:26.979543 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:26.979558 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 08:23:26.979544 6213 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1211 08:23:26.979659 6213 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 08:23:26.979731 6213 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979752 6213 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979685 6213 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979704 6213 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979762 6213 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.567557 
4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.567600 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.567610 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.567644 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.567665 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.576682 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.592521 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.605237 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.620398 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.633138 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc 
kubenswrapper[4992]: I1211 08:23:29.651593 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.664195 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.670418 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.670470 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.670484 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.670504 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.670518 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.679974 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.692324 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:29Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.702997 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:29 crc kubenswrapper[4992]: E1211 08:23:29.703265 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:29 crc kubenswrapper[4992]: E1211 08:23:29.703384 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:30.703359889 +0000 UTC m=+34.962833835 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.772838 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.772906 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.772918 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.772972 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.772986 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.875587 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.875659 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.875671 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.875689 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.875730 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.978422 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.978479 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.978491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.978511 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:29 crc kubenswrapper[4992]: I1211 08:23:29.978523 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:29Z","lastTransitionTime":"2025-12-11T08:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.036064 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.076824 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47
389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.082486 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.082525 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.082535 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.082551 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.082561 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.094966 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.094991 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:30 crc kubenswrapper[4992]: E1211 08:23:30.095197 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:30 crc kubenswrapper[4992]: E1211 08:23:30.095284 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.100393 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-
release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.117266 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.139693 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.180723 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\" 6213 factory.go:656] Stopping watch factory\\\\nI1211 08:23:26.979540 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:23:26.979543 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:26.979558 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 08:23:26.979544 6213 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1211 08:23:26.979659 6213 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 08:23:26.979731 6213 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979752 6213 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979685 6213 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979704 6213 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979762 6213 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.186713 
4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.186775 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.186791 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.186817 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.186833 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.199338 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.214618 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.233242 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.252183 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.265423 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc 
kubenswrapper[4992]: I1211 08:23:30.278923 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.290004 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.290050 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.290059 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.290075 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.290084 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.292505 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.304713 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.321064 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.334308 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.345086 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.364544 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.392661 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.392717 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.392726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.392747 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.392761 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.440330 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" event={"ID":"41015a59-be8f-40e9-9315-d4d0179897b1","Type":"ContainerStarted","Data":"9b48652fd9795d657a75f90ff344884a5b5c9a424e6387e57efe8e0022d1a188"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.448582 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/1.log" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.463305 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc 
kubenswrapper[4992]: I1211 08:23:30.481619 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.496199 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.496408 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc 
kubenswrapper[4992]: I1211 08:23:30.496453 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.496463 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.496484 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.496494 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.509917 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.524590 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.537234 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.551691 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.569047 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.599651 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.600737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.600799 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.600814 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.600834 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.600848 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.617013 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.634274 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.649457 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.671750 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff44008645c00b62d9431a463cb1328b087f3af6a1d4051a84e45cb5cdaed87f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"message\\\":\\\" 6213 factory.go:656] Stopping watch factory\\\\nI1211 08:23:26.979540 6213 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:23:26.979543 6213 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:26.979558 6213 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 08:23:26.979544 6213 handler.go:208] Removed 
*v1.NetworkPolicy event handler 4\\\\nI1211 08:23:26.979659 6213 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 08:23:26.979731 6213 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979752 6213 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:26.979685 6213 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979704 6213 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:26.979762 6213 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e 
Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"
host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.686298 
4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.703171 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.704218 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.704264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.704275 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.704293 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.704306 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.712797 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:30 crc kubenswrapper[4992]: E1211 08:23:30.712998 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:30 crc kubenswrapper[4992]: E1211 08:23:30.713096 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:32.713078349 +0000 UTC m=+36.972552275 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.720721 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay
.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.738976 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:30Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.807010 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.807078 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.807092 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.807114 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.807130 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.910699 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.910774 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.910801 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.910834 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.910862 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:30Z","lastTransitionTime":"2025-12-11T08:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.914403 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:30 crc kubenswrapper[4992]: E1211 08:23:30.914577 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:23:46.914538934 +0000 UTC m=+51.174012860 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:30 crc kubenswrapper[4992]: I1211 08:23:30.914675 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:30 crc kubenswrapper[4992]: E1211 08:23:30.914830 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:30 
crc kubenswrapper[4992]: E1211 08:23:30.914903 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:46.914885303 +0000 UTC m=+51.174359239 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015533 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015663 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015712 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.015854 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015874 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015897 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015937 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.015864 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.015987 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.016010 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.015934 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:47.015912358 +0000 UTC m=+51.275386304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.015960 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.016141 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:47.016076822 +0000 UTC m=+51.275550898 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.015924 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.016215 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.016236 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.016311 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:47.016303449 +0000 UTC m=+51.275777375 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.094835 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.094919 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.094987 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.095123 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.119254 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.119332 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.119350 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.119379 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.119396 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.223080 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.223143 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.223155 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.223176 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.223188 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.326483 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.326623 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.326664 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.326683 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.326697 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.429319 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.429381 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.429394 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.429415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.429429 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.532701 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.532763 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.532773 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.532798 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.532811 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.636810 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.636918 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.636937 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.636961 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.636979 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.740877 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.740956 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.740966 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.741010 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.741031 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.749101 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.750251 4992 scope.go:117] "RemoveContainer" containerID="f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c" Dec 11 08:23:31 crc kubenswrapper[4992]: E1211 08:23:31.750490 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.765237 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.781100 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.795981 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.811372 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.844309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.844366 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.844376 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.844396 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.844408 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.845307 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.864545 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548
c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.891332 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: 
I1211 08:23:31.903067 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.916516 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.934713 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.947730 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.947765 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.947774 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.947789 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.947797 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:31Z","lastTransitionTime":"2025-12-11T08:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.953791 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:31 crc kubenswrapper[4992]: I1211 08:23:31.969544 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:31.999744 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:31Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.016045 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:32Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.032859 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:32Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.044502 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:32Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.050696 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.050756 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.050773 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.050796 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.050812 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.055191 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:32Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:32 crc 
kubenswrapper[4992]: I1211 08:23:32.093995 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:32 crc kubenswrapper[4992]: E1211 08:23:32.094103 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.094222 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:32 crc kubenswrapper[4992]: E1211 08:23:32.094451 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.154037 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.154119 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.154143 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.154175 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.154199 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.258188 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.258254 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.258273 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.258300 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.258320 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.361456 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.361555 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.361579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.361614 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.361678 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.465991 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.466054 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.466075 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.466094 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.466108 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.568541 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.568595 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.568607 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.568650 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.568666 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.671490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.671540 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.671552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.671568 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.671579 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.736484 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:32 crc kubenswrapper[4992]: E1211 08:23:32.736735 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:32 crc kubenswrapper[4992]: E1211 08:23:32.736879 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:36.736845575 +0000 UTC m=+40.996319551 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.774898 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.774941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.774949 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.774971 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.774981 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.879258 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.879312 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.879323 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.879341 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.879353 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.981971 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.981999 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.982006 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.982019 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:32 crc kubenswrapper[4992]: I1211 08:23:32.982029 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:32Z","lastTransitionTime":"2025-12-11T08:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.084820 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.084876 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.084886 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.084903 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.084913 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.094335 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.094342 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:33 crc kubenswrapper[4992]: E1211 08:23:33.094481 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:33 crc kubenswrapper[4992]: E1211 08:23:33.094579 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.187609 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.187668 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.187683 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.187696 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.187705 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.290995 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.291058 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.291092 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.291112 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.291123 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.394334 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.394407 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.394421 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.394444 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.394461 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.499410 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.499576 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.499603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.499713 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.499788 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.603029 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.603087 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.603107 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.603128 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.603142 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.706621 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.706714 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.706726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.706745 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.706757 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.810759 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.810833 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.810851 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.810876 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.810899 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.914382 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.914459 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.914492 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.914523 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:33 crc kubenswrapper[4992]: I1211 08:23:33.914543 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:33Z","lastTransitionTime":"2025-12-11T08:23:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.018778 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.018844 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.018863 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.018890 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.018907 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.095357 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.095438 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:34 crc kubenswrapper[4992]: E1211 08:23:34.095684 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:34 crc kubenswrapper[4992]: E1211 08:23:34.095852 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.122350 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.122425 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.122704 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.122735 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.122769 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.226385 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.226447 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.226465 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.226489 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.226508 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.330116 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.330194 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.330219 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.330273 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.330309 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.433572 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.433618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.433647 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.433667 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.433681 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.537866 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.537929 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.537944 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.537969 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.537984 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.641000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.641068 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.641088 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.641114 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.641135 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.744570 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.744617 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.744626 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.744662 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.744674 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.848172 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.848236 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.848247 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.848267 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.848280 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.951753 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.951830 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.951855 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.951889 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:34 crc kubenswrapper[4992]: I1211 08:23:34.951908 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:34Z","lastTransitionTime":"2025-12-11T08:23:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.055184 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.055279 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.055304 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.055338 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.055363 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.095001 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.095023 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:35 crc kubenswrapper[4992]: E1211 08:23:35.095202 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:35 crc kubenswrapper[4992]: E1211 08:23:35.095371 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.159088 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.159219 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.159239 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.159291 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.159320 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.262727 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.262783 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.262800 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.263742 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.263933 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.368421 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.368463 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.368473 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.368492 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.368505 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.472745 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.472792 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.472803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.472821 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.472834 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.576017 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.576065 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.576076 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.576097 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.576173 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.679032 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.679086 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.679097 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.679117 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.679131 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.783626 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.783750 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.783773 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.783809 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.783831 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.887049 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.887100 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.887108 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.887126 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.887135 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.992510 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.992577 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.992592 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.992618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:35 crc kubenswrapper[4992]: I1211 08:23:35.992726 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:35Z","lastTransitionTime":"2025-12-11T08:23:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.094259 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.094273 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:36 crc kubenswrapper[4992]: E1211 08:23:36.094447 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:36 crc kubenswrapper[4992]: E1211 08:23:36.094715 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.096214 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.096268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.096289 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.096318 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.096339 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.112159 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.134002 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.157932 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.177249 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.194964 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048
ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d
9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.199211 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.199458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.200289 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.200417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.200527 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.212479 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.242490 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.262911 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.279010 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.294676 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.302988 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.303175 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.303235 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.303345 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.303410 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.314816 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.331099 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.346967 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.370930 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.386974 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.400987 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc 
kubenswrapper[4992]: I1211 08:23:36.406544 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.406587 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.406598 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.406617 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.406654 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.416774 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:36Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.509901 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.509970 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.509993 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.510021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.510040 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.613078 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.613142 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.613155 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.613179 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.613193 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.715429 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.715466 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.715476 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.715490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.715500 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.788084 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:36 crc kubenswrapper[4992]: E1211 08:23:36.788260 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:36 crc kubenswrapper[4992]: E1211 08:23:36.788356 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:23:44.788338556 +0000 UTC m=+49.047812482 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.819568 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.819625 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.819664 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.819683 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.819696 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.922785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.922878 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.922898 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.922923 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:36 crc kubenswrapper[4992]: I1211 08:23:36.922941 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:36Z","lastTransitionTime":"2025-12-11T08:23:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.026191 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.026242 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.026255 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.026273 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.026300 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.094389 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.094564 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:37 crc kubenswrapper[4992]: E1211 08:23:37.094691 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:37 crc kubenswrapper[4992]: E1211 08:23:37.094862 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.130398 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.130480 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.130499 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.130526 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.130551 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.234572 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.234660 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.234680 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.234702 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.234714 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.338528 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.338589 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.338603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.338625 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.338671 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.442433 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.442500 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.442510 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.442536 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.442549 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.545692 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.545736 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.545749 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.545765 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.545777 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.649764 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.649841 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.649860 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.649889 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.649908 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.753582 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.753732 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.753769 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.753801 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.753820 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.856965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.857035 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.857073 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.857107 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.857130 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.951152 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.951222 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.951238 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.951274 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.951291 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: E1211 08:23:37.969532 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:37Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.976175 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.976238 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.976251 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.976275 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:37 crc kubenswrapper[4992]: I1211 08:23:37.976293 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:37Z","lastTransitionTime":"2025-12-11T08:23:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:37 crc kubenswrapper[4992]: E1211 08:23:37.998551 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:37Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.004090 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.004165 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.004190 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.004218 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.004235 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: E1211 08:23:38.025205 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:38Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.030609 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.030687 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.030704 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.030731 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.030745 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: E1211 08:23:38.052394 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:38Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.058497 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.058563 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.058579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.058609 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.058626 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: E1211 08:23:38.077388 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:38Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:38 crc kubenswrapper[4992]: E1211 08:23:38.077570 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.080244 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.080293 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.080307 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.080329 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.080343 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.094904 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.095052 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:38 crc kubenswrapper[4992]: E1211 08:23:38.095372 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:38 crc kubenswrapper[4992]: E1211 08:23:38.095604 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.183351 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.183401 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.183411 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.183430 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.183442 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.286834 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.286869 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.286877 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.286892 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.286901 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.390546 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.390606 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.390618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.390667 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.390679 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.493609 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.493719 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.493739 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.493801 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.493821 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.597689 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.597721 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.597730 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.597936 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.597946 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.700810 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.700865 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.700883 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.700908 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.700926 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.803877 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.803927 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.803942 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.803964 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.803993 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.907789 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.908174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.908307 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.908433 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:38 crc kubenswrapper[4992]: I1211 08:23:38.908552 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:38Z","lastTransitionTime":"2025-12-11T08:23:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.011661 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.012024 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.012158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.012334 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.012489 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.094428 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:39 crc kubenswrapper[4992]: E1211 08:23:39.094734 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.094427 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:39 crc kubenswrapper[4992]: E1211 08:23:39.094894 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.115846 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.115904 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.115920 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.115946 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.115963 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.219063 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.219132 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.219157 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.219187 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.219210 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.323570 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.323656 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.323673 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.323698 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.323717 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.426959 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.427022 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.427033 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.427052 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.427065 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.529485 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.529543 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.529557 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.529578 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.529590 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.632290 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.632356 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.632374 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.632398 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.632419 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.736148 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.736230 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.736247 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.736274 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.736292 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.840371 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.840455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.840487 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.840520 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.840542 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.944738 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.944817 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.944841 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.944871 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:39 crc kubenswrapper[4992]: I1211 08:23:39.944891 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:39Z","lastTransitionTime":"2025-12-11T08:23:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.049748 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.049820 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.049859 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.049896 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.049920 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.095107 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.095210 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:40 crc kubenswrapper[4992]: E1211 08:23:40.095360 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:40 crc kubenswrapper[4992]: E1211 08:23:40.095685 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.159199 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.159281 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.159317 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.159350 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.159372 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.263479 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.263547 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.263568 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.263597 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.263620 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.367426 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.367481 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.367491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.367513 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.367526 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.471694 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.471758 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.471768 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.471791 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.471802 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.575163 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.575212 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.575225 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.575245 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.575258 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.678091 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.678155 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.678172 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.678195 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.678213 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.782110 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.782246 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.782324 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.782400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.782428 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.886362 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.886432 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.886455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.886483 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.886505 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.988962 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.989019 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.989034 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.989053 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:40 crc kubenswrapper[4992]: I1211 08:23:40.989068 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:40Z","lastTransitionTime":"2025-12-11T08:23:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.092429 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.092489 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.092511 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.092531 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.092547 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.094889 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.094910 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:41 crc kubenswrapper[4992]: E1211 08:23:41.095052 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:41 crc kubenswrapper[4992]: E1211 08:23:41.095223 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.196108 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.196157 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.196176 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.196223 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.196249 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.300106 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.300169 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.300187 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.300212 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.300231 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.404139 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.404232 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.404258 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.404289 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.404317 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.507134 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.507169 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.507177 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.507190 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.507198 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.611677 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.611733 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.611757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.611787 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.611809 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.716264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.716313 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.716333 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.716356 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.716372 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.820221 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.820278 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.820295 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.820322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.820340 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.922930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.923012 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.923030 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.923062 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:41 crc kubenswrapper[4992]: I1211 08:23:41.923079 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:41Z","lastTransitionTime":"2025-12-11T08:23:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.026727 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.026776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.026787 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.026828 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.026841 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.094352 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:42 crc kubenswrapper[4992]: E1211 08:23:42.094596 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.094754 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:42 crc kubenswrapper[4992]: E1211 08:23:42.095101 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.129899 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.129969 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.129994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.130019 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.130037 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.232756 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.232807 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.232823 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.232845 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.232862 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.336222 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.336264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.336279 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.336324 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.336338 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.439365 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.439751 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.439838 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.440240 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.440414 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.544131 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.544237 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.544259 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.544338 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.544361 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.646930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.647249 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.647331 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.647460 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.647537 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.751977 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.752039 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.752057 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.752083 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.752102 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.835395 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.848829 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.855676 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.855905 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.856031 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.856184 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.856395 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.856401 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.878560 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.894802 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.928006 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.945578 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.959948 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.959990 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.960002 4992 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.960016 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.960025 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:42Z","lastTransitionTime":"2025-12-11T08:23:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.965240 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:42 crc kubenswrapper[4992]: I1211 08:23:42.987808 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:42Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.007208 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.022163 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc 
kubenswrapper[4992]: I1211 08:23:43.041872 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.057167 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.062440 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.062591 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.062692 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.062805 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.062885 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.075537 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.091398 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.094529 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.094592 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:43 crc kubenswrapper[4992]: E1211 08:23:43.094783 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:43 crc kubenswrapper[4992]: E1211 08:23:43.094897 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.105464 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.130845 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048
ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d
9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.146418 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.166343 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.166409 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.166424 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc 
kubenswrapper[4992]: I1211 08:23:43.166452 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.166470 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.173878 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:43Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.269348 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.269394 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.269404 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc 
kubenswrapper[4992]: I1211 08:23:43.269422 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.269434 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.372206 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.372251 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.372263 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.372284 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.372297 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.475250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.475293 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.475304 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.475324 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.475339 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.578087 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.578129 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.578138 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.578158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.578170 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.681515 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.681941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.682014 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.682106 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.682165 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.784731 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.784787 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.784795 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.784815 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.784827 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.887603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.888188 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.888328 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.888454 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.888571 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.991830 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.991899 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.991919 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.991958 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:43 crc kubenswrapper[4992]: I1211 08:23:43.991997 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:43Z","lastTransitionTime":"2025-12-11T08:23:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.094036 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.094150 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:44 crc kubenswrapper[4992]: E1211 08:23:44.094208 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:44 crc kubenswrapper[4992]: E1211 08:23:44.094326 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.096265 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.096322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.096346 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.096379 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.096398 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.199847 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.200406 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.200427 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.200457 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.200475 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.304047 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.304459 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.304883 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.305203 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.305530 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.409146 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.409212 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.409232 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.409260 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.409281 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.511518 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.511552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.511563 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.511580 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.511592 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.615359 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.615437 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.615455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.615488 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.615515 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.718907 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.719158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.719174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.719195 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.719211 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.823306 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.823371 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.823389 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.823415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.823432 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:44 crc kubenswrapper[4992]: E1211 08:23:44.881817 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:44 crc kubenswrapper[4992]: E1211 08:23:44.881967 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:24:00.881933134 +0000 UTC m=+65.141407100 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.882423 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.927436 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.927505 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.927517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.927536 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:44 crc kubenswrapper[4992]: I1211 08:23:44.927548 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:44Z","lastTransitionTime":"2025-12-11T08:23:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.033118 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.033186 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.033210 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.033243 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.033266 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.094404 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:45 crc kubenswrapper[4992]: E1211 08:23:45.094585 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.094987 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:45 crc kubenswrapper[4992]: E1211 08:23:45.095213 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.136829 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.136899 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.136920 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.136950 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.136973 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.240536 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.240617 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.240671 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.240703 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.240721 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.344714 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.344789 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.344808 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.344837 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.344860 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.448808 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.448890 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.448916 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.448948 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.448975 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.552122 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.552178 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.552190 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.552211 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.552227 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.655417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.655460 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.655471 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.655487 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.655498 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.758268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.758309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.758321 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.758342 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.758355 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.862178 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.862249 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.862308 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.862341 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.862362 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.966173 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.966234 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.966245 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.966264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:45 crc kubenswrapper[4992]: I1211 08:23:45.966277 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:45Z","lastTransitionTime":"2025-12-11T08:23:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.070149 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.070211 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.070229 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.070254 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.070272 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.094061 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:46 crc kubenswrapper[4992]: E1211 08:23:46.094239 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.094316 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:46 crc kubenswrapper[4992]: E1211 08:23:46.095052 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.095678 4992 scope.go:117] "RemoveContainer" containerID="f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.124326 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.171510 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.176335 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.176397 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.176414 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.176441 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.176461 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.214646 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.232573 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.248552 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.264883 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.281869 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.281994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.282030 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.282040 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.282054 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.282064 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.310966 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.326364 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.341163 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.355956 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.365846 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.377273 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc 
kubenswrapper[4992]: I1211 08:23:46.384825 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.384868 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.384882 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.384899 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.384911 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.389398 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892
b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.401526 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.412196 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.424457 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.437646 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.488478 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.488532 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.488543 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.488562 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.488575 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.519911 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/1.log" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.521825 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.523051 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.548135 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.563428 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.578337 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.591035 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.591316 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.591401 4992 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.591484 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.591556 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.596082 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d
608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.613236 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.660977 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.673887 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.684331 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.694557 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.694603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.694613 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc 
kubenswrapper[4992]: I1211 08:23:46.694646 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.694658 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.708554 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\
\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.721592 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.734010 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.746410 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.756808 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.765803 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.774700 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.783740 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.791718 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.797484 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.797529 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.797539 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.797555 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.797565 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.799953 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:46Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:46 crc 
kubenswrapper[4992]: I1211 08:23:46.900855 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.900928 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.900947 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.900982 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:46 crc kubenswrapper[4992]: I1211 08:23:46.901001 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:46Z","lastTransitionTime":"2025-12-11T08:23:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.004802 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.004856 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.004875 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.004903 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.004922 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.108060 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.108090 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.108100 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.108115 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.108126 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.179410 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.179545 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.179716 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.179805 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.179828 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.179924 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.179971 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.180040 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180263 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:24:19.180227643 +0000 UTC m=+83.439701609 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180338 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.180345 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180405 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:24:19.180395117 +0000 UTC m=+83.439869053 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180400 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180519 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180552 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180577 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180586 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:24:19.180558682 +0000 UTC m=+83.440032638 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180595 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180677 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180709 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180683 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:24:19.180622903 +0000 UTC m=+83.440096859 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:47 crc kubenswrapper[4992]: E1211 08:23:47.180828 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:24:19.180795438 +0000 UTC m=+83.440269494 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.210853 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.210927 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.210952 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.210983 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.211005 4992 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.314070 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.314117 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.314127 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.314144 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.314154 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.416542 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.416662 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.416690 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.416726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.416748 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.520326 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.520375 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.520386 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.520408 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.520420 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.623542 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.623585 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.623596 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.623615 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.623644 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.729740 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.729788 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.729799 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.729818 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.729828 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.832795 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.832854 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.832867 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.832888 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.832904 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.935964 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.936027 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.936046 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.936075 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:47 crc kubenswrapper[4992]: I1211 08:23:47.936094 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:47Z","lastTransitionTime":"2025-12-11T08:23:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.039225 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.039289 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.039307 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.039333 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.039350 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.094908 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.095088 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.095758 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.095868 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.142803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.142869 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.142887 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.142912 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.142931 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.246226 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.246292 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.246303 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.246325 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.246344 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.349932 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.349998 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.350018 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.350044 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.350064 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.429199 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.429267 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.429278 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.429590 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.429770 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.454038 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.462043 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.462125 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.462160 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.462218 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.462249 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.483859 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.489554 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.489923 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.489972 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.490010 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.490368 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.512035 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.518140 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.518241 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.518276 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.518312 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.518333 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.529700 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/2.log" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.530323 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/1.log" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.535208 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a" exitCode=1 Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.535291 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.535374 4992 scope.go:117] "RemoveContainer" containerID="f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.536585 4992 scope.go:117] "RemoveContainer" containerID="2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a" Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.536892 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.537689 4992 kubelet_node_status.go:585] "Error updating 
node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.542624 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.542697 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.542714 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.542739 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.542757 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.550876 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.556044 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: E1211 08:23:48.556317 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.562417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.562474 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.562487 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.562513 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.562530 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.588313 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy 
(0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 
08:23:48.602609 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.617175 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.634271 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.650179 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.665213 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc 
kubenswrapper[4992]: I1211 08:23:48.665978 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.666021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.666033 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.666053 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.666070 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.683494 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.703202 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.715699 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.734216 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.748785 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.765546 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.769140 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.769186 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.769197 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.769220 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.769231 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.781386 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892
b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.798866 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.827673 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.852821 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548
c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.873799 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.873857 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.873869 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.873889 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.873901 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.875994 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:48Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.976994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.977064 4992 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.977081 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.977108 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:48 crc kubenswrapper[4992]: I1211 08:23:48.977127 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:48Z","lastTransitionTime":"2025-12-11T08:23:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.080008 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.080096 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.080107 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.080128 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.080143 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.094459 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.094482 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:49 crc kubenswrapper[4992]: E1211 08:23:49.094749 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:49 crc kubenswrapper[4992]: E1211 08:23:49.094924 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.183751 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.183787 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.183797 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.183820 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.183838 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.287720 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.287784 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.287797 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.287816 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.287826 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.392456 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.392517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.392529 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.392553 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.392569 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.495869 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.495943 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.495962 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.495989 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.496008 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.541834 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/2.log" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.599590 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.599654 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.599664 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.599681 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.599690 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.702308 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.702376 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.702391 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.702417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.702436 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.804766 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.804812 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.804822 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.804838 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.804849 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.908062 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.908135 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.908157 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.908185 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:49 crc kubenswrapper[4992]: I1211 08:23:49.908205 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:49Z","lastTransitionTime":"2025-12-11T08:23:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.011211 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.011263 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.011273 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.011294 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.011305 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.094921 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.095012 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:50 crc kubenswrapper[4992]: E1211 08:23:50.095105 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:50 crc kubenswrapper[4992]: E1211 08:23:50.095224 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.113511 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.113555 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.113564 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.113584 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.113597 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.215947 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.216028 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.216039 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.216080 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.216092 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.319659 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.319706 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.319719 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.319742 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.319753 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.422426 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.422467 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.422478 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.422496 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.422509 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.525951 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.526021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.526041 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.526071 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.526092 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.629316 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.629355 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.629368 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.629385 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.629398 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.733140 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.733223 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.733243 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.733268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.733287 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.836688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.836745 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.836754 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.836774 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.836786 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.940015 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.940073 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.940084 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.940103 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:50 crc kubenswrapper[4992]: I1211 08:23:50.940114 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:50Z","lastTransitionTime":"2025-12-11T08:23:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.043335 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.043444 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.043462 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.043519 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.043540 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.095074 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.095128 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:51 crc kubenswrapper[4992]: E1211 08:23:51.095261 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:51 crc kubenswrapper[4992]: E1211 08:23:51.095695 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.147061 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.147146 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.147171 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.147202 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.147226 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.251530 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.251574 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.251584 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.251601 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.251612 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.353748 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.353808 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.353827 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.353845 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.353860 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.456700 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.456756 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.456765 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.456780 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.456795 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.560682 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.560746 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.560757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.560786 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.560803 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.664720 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.664785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.664803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.664835 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.664858 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.772605 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.772693 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.772705 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.772725 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.772742 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.876803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.876878 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.876891 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.876914 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.876927 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.979399 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.979434 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.979443 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.979457 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:51 crc kubenswrapper[4992]: I1211 08:23:51.979466 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:51Z","lastTransitionTime":"2025-12-11T08:23:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.082740 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.082812 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.082833 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.082857 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.082877 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.094294 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:52 crc kubenswrapper[4992]: E1211 08:23:52.094477 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.094610 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:52 crc kubenswrapper[4992]: E1211 08:23:52.094878 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.186241 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.186308 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.186343 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.186467 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.186491 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.290684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.290834 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.290921 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.290954 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.290995 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.394613 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.394725 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.394746 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.394772 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.394791 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.497679 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.497728 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.497737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.497757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.497768 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.601252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.601328 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.601354 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.601385 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.601414 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.706353 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.706438 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.706465 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.706497 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.706521 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.810189 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.810366 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.810393 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.810431 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.810451 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.914502 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.914553 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.914565 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.914584 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:52 crc kubenswrapper[4992]: I1211 08:23:52.914596 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:52Z","lastTransitionTime":"2025-12-11T08:23:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.017807 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.017860 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.017871 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.017889 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.017904 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.093983 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.094082 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:53 crc kubenswrapper[4992]: E1211 08:23:53.094235 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:53 crc kubenswrapper[4992]: E1211 08:23:53.094370 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.121263 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.121322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.121332 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.121355 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.121370 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.224594 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.224657 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.224671 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.224693 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.224704 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.328021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.328089 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.328106 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.328132 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.328151 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.430517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.430593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.430615 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.430672 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.430695 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.533490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.533545 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.533564 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.533588 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.533605 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.637564 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.638013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.638161 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.638288 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.638407 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.742041 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.742485 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.742771 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.742996 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.743204 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.847153 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.847220 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.847253 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.847280 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.847302 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.950343 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.950837 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.951054 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.951227 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:53 crc kubenswrapper[4992]: I1211 08:23:53.951408 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:53Z","lastTransitionTime":"2025-12-11T08:23:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.055545 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.055616 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.055681 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.055720 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.055740 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.094998 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:54 crc kubenswrapper[4992]: E1211 08:23:54.095182 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.095527 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:54 crc kubenswrapper[4992]: E1211 08:23:54.095693 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.158522 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.158710 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.158736 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.158813 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.158878 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.262068 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.262121 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.262134 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.262151 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.262164 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.364576 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.364660 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.364671 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.364695 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.364707 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.467322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.467375 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.467389 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.467408 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.467423 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.570941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.570993 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.571008 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.571029 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.571045 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.673943 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.674366 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.674910 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.675174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.675338 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.778581 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.778628 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.778661 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.778699 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.778718 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.881056 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.881103 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.881113 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.881132 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.881144 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.983420 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.983477 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.983490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.983512 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:54 crc kubenswrapper[4992]: I1211 08:23:54.983529 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:54Z","lastTransitionTime":"2025-12-11T08:23:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.086185 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.086237 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.086250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.086272 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.086287 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.094684 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.094836 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:55 crc kubenswrapper[4992]: E1211 08:23:55.094947 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:55 crc kubenswrapper[4992]: E1211 08:23:55.095076 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.190335 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.190412 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.190427 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.190446 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.190458 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.294400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.294458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.294469 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.294495 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.294506 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.398611 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.398681 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.398692 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.398712 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.398724 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.502162 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.502250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.502270 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.502298 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.502318 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.605249 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.605306 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.605323 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.605345 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.605358 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.708764 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.708819 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.708828 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.708851 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.708863 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.811694 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.811777 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.811788 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.811823 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.811841 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.915340 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.915393 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.915418 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.915444 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:55 crc kubenswrapper[4992]: I1211 08:23:55.915458 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:55Z","lastTransitionTime":"2025-12-11T08:23:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.019315 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.019423 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.019450 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.019484 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.019508 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.094327 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:56 crc kubenswrapper[4992]: E1211 08:23:56.094461 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.094571 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:56 crc kubenswrapper[4992]: E1211 08:23:56.094847 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.117076 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.122657 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.122721 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.122735 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.122758 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.122773 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.131206 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.152275 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.172194 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.196033 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f004e76f2cfe869bd4cb0fed18806568b8f1133e8a510bc4681d10e38d9c8e0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:29Z\\\",\\\"message\\\":\\\"il\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382402 6401 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/downloads]} name:Service_openshift-console/downloads_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 
fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.213:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {2ead45b3-c313-4fbc-a7bc-2b3c4ffd610c}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 08:23:28.382599 6401 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy 
(0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 
08:23:56.210856 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.226351 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.227017 4992 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.227150 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.227264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.227394 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.229549 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.246617 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.262972 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.275360 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc 
kubenswrapper[4992]: I1211 08:23:56.287401 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.300758 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.313156 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.327425 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.330802 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.330868 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.330886 4992 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.330910 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.330927 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.339028 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d
4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.358860 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.376230 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.387711 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:56Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.434684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.434769 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.434786 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc 
kubenswrapper[4992]: I1211 08:23:56.434811 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.434830 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.538147 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.538199 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.538211 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.538227 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.538240 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.641411 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.641455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.641464 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.641480 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.641490 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.744686 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.744755 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.744772 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.744803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.744839 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.848613 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.848672 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.848681 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.848699 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.848714 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.952236 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.952291 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.952306 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.952329 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:56 crc kubenswrapper[4992]: I1211 08:23:56.952342 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:56Z","lastTransitionTime":"2025-12-11T08:23:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.055552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.055614 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.055665 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.055690 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.055707 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.094425 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.094558 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:57 crc kubenswrapper[4992]: E1211 08:23:57.094591 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:57 crc kubenswrapper[4992]: E1211 08:23:57.094864 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.159323 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.159387 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.159411 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.159441 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.159465 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.262589 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.262662 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.262674 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.262692 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.262706 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.366606 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.366725 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.366750 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.366782 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.366804 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.469656 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.469702 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.469715 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.469734 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.469745 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.572549 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.572603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.572620 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.572678 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.572746 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.676489 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.676924 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.677048 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.677208 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.677434 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.781013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.781045 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.781057 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.781073 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.781084 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.884457 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.884519 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.884536 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.884570 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.884583 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.987910 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.987958 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.987970 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.987990 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:57 crc kubenswrapper[4992]: I1211 08:23:57.988003 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:57Z","lastTransitionTime":"2025-12-11T08:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.091043 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.091095 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.091111 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.091134 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.091154 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.096116 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.096207 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.096323 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.096373 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.193712 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.193763 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.193782 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.193809 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.193826 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.298270 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.298342 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.298363 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.298392 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.298415 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.401689 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.401737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.401753 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.401776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.401793 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.505359 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.505720 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.505736 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.505758 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.505775 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.609428 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.609473 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.609491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.609517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.609534 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.712952 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.713003 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.713020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.713082 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.713100 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.816757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.816821 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.816844 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.816871 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.816888 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.827205 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.827549 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.827970 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.828057 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.828082 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.848019 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:58Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.853096 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.853148 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.853170 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.853193 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.853207 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.874994 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:58Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.880118 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.880188 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.880206 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.880233 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.880253 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.899973 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:58Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.905039 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.905127 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.905154 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.905191 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.905216 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.927479 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:58Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.932816 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.932878 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.932900 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.932929 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.932952 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.953425 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:23:58Z is after 2025-08-24T17:21:41Z" Dec 11 08:23:58 crc kubenswrapper[4992]: E1211 08:23:58.953769 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.955884 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.955946 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.955963 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.955991 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:58 crc kubenswrapper[4992]: I1211 08:23:58.956008 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:58Z","lastTransitionTime":"2025-12-11T08:23:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.058832 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.058892 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.058904 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.058924 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.058938 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.094593 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.094677 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:23:59 crc kubenswrapper[4992]: E1211 08:23:59.094829 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:23:59 crc kubenswrapper[4992]: E1211 08:23:59.095021 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.162109 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.162191 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.162204 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.162225 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.162248 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.265913 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.265977 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.265994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.266018 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.266034 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.368919 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.368965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.368976 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.368994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.369006 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.471019 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.471081 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.471091 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.471113 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.471128 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.574403 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.574460 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.574469 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.574495 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.574508 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.677524 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.677572 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.677585 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.677607 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.677618 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.780513 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.780552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.780563 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.780590 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.780602 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.883703 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.883746 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.883759 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.883776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.883788 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.985877 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.985920 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.985930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.985947 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:23:59 crc kubenswrapper[4992]: I1211 08:23:59.985956 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:23:59Z","lastTransitionTime":"2025-12-11T08:23:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.088601 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.088678 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.088691 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.088710 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.088722 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.094286 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:00 crc kubenswrapper[4992]: E1211 08:24:00.094404 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.094595 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:00 crc kubenswrapper[4992]: E1211 08:24:00.094878 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.191447 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.191486 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.191497 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.191510 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.191519 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.294587 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.294669 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.294681 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.294702 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.294714 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.398052 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.398124 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.398142 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.398169 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.398188 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.501403 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.501469 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.501488 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.501517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.501532 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.603851 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.603904 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.603915 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.603940 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.603953 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.707604 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.707681 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.707700 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.707729 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.707744 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.810425 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.810496 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.810509 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.810527 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.810537 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.904727 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:00 crc kubenswrapper[4992]: E1211 08:24:00.904927 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:24:00 crc kubenswrapper[4992]: E1211 08:24:00.905016 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:24:32.90499412 +0000 UTC m=+97.164468066 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.913619 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.913676 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.913687 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.913706 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:00 crc kubenswrapper[4992]: I1211 08:24:00.913714 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:00Z","lastTransitionTime":"2025-12-11T08:24:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.016739 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.016785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.016794 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.016809 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.016821 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.094781 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:01 crc kubenswrapper[4992]: E1211 08:24:01.094924 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.094782 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:01 crc kubenswrapper[4992]: E1211 08:24:01.094991 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.120173 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.120316 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.120337 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.120364 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.120422 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.223537 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.223599 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.223611 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.223627 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.223677 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.326853 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.326906 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.326920 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.326941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.326955 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.429896 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.429965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.429981 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.430006 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.430023 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.532908 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.532960 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.532972 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.532989 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.533001 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.635580 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.635622 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.635653 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.635672 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.635683 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.738965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.739009 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.739020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.739040 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.739051 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.842124 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.842171 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.842181 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.842198 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.842208 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.945393 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.945503 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.945527 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.945552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:01 crc kubenswrapper[4992]: I1211 08:24:01.945572 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:01Z","lastTransitionTime":"2025-12-11T08:24:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.048366 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.048414 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.048425 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.048451 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.048466 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.094229 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.094240 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:02 crc kubenswrapper[4992]: E1211 08:24:02.094575 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:02 crc kubenswrapper[4992]: E1211 08:24:02.094906 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.094985 4992 scope.go:117] "RemoveContainer" containerID="2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a" Dec 11 08:24:02 crc kubenswrapper[4992]: E1211 08:24:02.095290 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.120561 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.136536 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.151923 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.151985 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.152000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.152021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.152031 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.155147 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.171468 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.187616 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.200891 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.211584 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.233464 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.244210 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.255317 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.255357 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.255369 4992 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.255385 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.255397 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.256253 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.271645 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.281171 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.290778 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc 
kubenswrapper[4992]: I1211 08:24:02.301976 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.313912 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.326393 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.338713 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.349090 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:02Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.357889 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.357930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.357947 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.357965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.357976 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.460776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.460839 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.460847 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.460863 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.460872 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.563158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.563233 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.563251 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.563276 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.563293 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.666214 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.666275 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.666290 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.666312 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.666327 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.769598 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.769663 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.769683 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.769702 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.769715 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.873010 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.873090 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.873118 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.873147 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.873170 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.978118 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.978176 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.978187 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.978207 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:02 crc kubenswrapper[4992]: I1211 08:24:02.978221 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:02Z","lastTransitionTime":"2025-12-11T08:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.080970 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.081024 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.081035 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.081054 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.081068 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.094600 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.094670 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:03 crc kubenswrapper[4992]: E1211 08:24:03.094798 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:03 crc kubenswrapper[4992]: E1211 08:24:03.095205 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.184440 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.184490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.184500 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.184518 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.184532 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.287400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.287450 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.287490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.287510 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.287550 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.390167 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.390218 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.390230 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.390252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.390264 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.492746 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.492799 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.492815 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.492833 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.492847 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.594775 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.594829 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.594840 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.594859 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.594872 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.698186 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.698219 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.698232 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.698251 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.698263 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.801781 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.801883 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.801901 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.801962 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.801983 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.904486 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.904558 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.904579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.904603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:03 crc kubenswrapper[4992]: I1211 08:24:03.904621 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:03Z","lastTransitionTime":"2025-12-11T08:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.007412 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.007484 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.007503 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.007528 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.007550 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.094908 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.094974 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:04 crc kubenswrapper[4992]: E1211 08:24:04.095117 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:04 crc kubenswrapper[4992]: E1211 08:24:04.095228 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.109247 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.109309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.109327 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.109352 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.109370 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.211923 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.211967 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.211978 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.211994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.212004 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.314818 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.314855 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.314865 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.314879 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.314888 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.417589 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.417716 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.417743 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.417774 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.417799 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.520668 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.520714 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.520726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.520743 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.520755 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.620828 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/0.log" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.620897 4992 generic.go:334] "Generic (PLEG): container finished" podID="5838adfc-502f-44ac-be33-14f964497c4f" containerID="59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294" exitCode=1 Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.620933 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerDied","Data":"59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.621455 4992 scope.go:117] "RemoveContainer" containerID="59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.623231 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.623315 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.623339 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.623371 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.623393 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.632219 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc 
kubenswrapper[4992]: I1211 08:24:04.644403 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.657542 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:03Z\\\",\\\"message\\\":\\\"2025-12-11T08:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f\\\\n2025-12-11T08:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f to /host/opt/cni/bin/\\\\n2025-12-11T08:23:18Z [verbose] multus-daemon started\\\\n2025-12-11T08:23:18Z [verbose] Readiness Indicator file check\\\\n2025-12-11T08:24:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.667893 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.679731 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.692820 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.702420 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.711975 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.724032 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.726466 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.726491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.726500 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.726517 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.726528 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.743477 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.759919 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548
c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.771342 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: 
I1211 08:24:04.785867 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.805263 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.818912 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.829100 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.829150 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.829163 4992 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.829179 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.829191 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.830544 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.845990 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be76
1cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.862374 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:04Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.931397 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.931452 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.931462 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.931477 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:04 crc kubenswrapper[4992]: I1211 08:24:04.931491 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:04Z","lastTransitionTime":"2025-12-11T08:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.034821 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.034891 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.034909 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.034936 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.034953 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.094856 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.094856 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:05 crc kubenswrapper[4992]: E1211 08:24:05.095026 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:05 crc kubenswrapper[4992]: E1211 08:24:05.095176 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.138857 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.138912 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.138929 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.138948 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.138966 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.241257 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.241294 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.241306 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.241321 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.241330 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.343352 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.343435 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.343470 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.343490 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.343502 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.446134 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.446180 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.446191 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.446208 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.446222 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.549394 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.549452 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.549470 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.549495 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.549514 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.625869 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/0.log" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.625940 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerStarted","Data":"04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.641622 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.651688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.651726 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.651737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.651753 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.651763 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.666374 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:03Z\\\",\\\"message\\\":\\\"2025-12-11T08:23:18+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f\\\\n2025-12-11T08:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f to /host/opt/cni/bin/\\\\n2025-12-11T08:23:18Z [verbose] multus-daemon started\\\\n2025-12-11T08:23:18Z [verbose] Readiness Indicator file check\\\\n2025-12-11T08:24:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.677522 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.689558 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc 
kubenswrapper[4992]: I1211 08:24:05.698902 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w
4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.710477 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.731087 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.749309 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.757776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.757854 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.758180 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.758210 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.758473 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.767125 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.786332 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.805997 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.822316 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.835129 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.847623 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.859411 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.861435 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.861475 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.861488 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.861505 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.861516 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.873779 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.886083 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.908773 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:05Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.964219 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.964261 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.964270 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.964283 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:05 crc kubenswrapper[4992]: I1211 08:24:05.964291 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:05Z","lastTransitionTime":"2025-12-11T08:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.068001 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.068048 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.068065 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.068088 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.068107 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.100462 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:06 crc kubenswrapper[4992]: E1211 08:24:06.100842 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.102696 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:06 crc kubenswrapper[4992]: E1211 08:24:06.102868 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.118699 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.133211 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.148687 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.160488 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.169745 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.169786 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.169847 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.169864 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.169877 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.179040 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.194705 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.211662 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.228776 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:03Z\\\",\\\"message\\\":\\\"2025-12-11T08:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f\\\\n2025-12-11T08:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f to /host/opt/cni/bin/\\\\n2025-12-11T08:23:18Z [verbose] multus-daemon started\\\\n2025-12-11T08:23:18Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T08:24:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.241508 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7
ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.256566 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc 
kubenswrapper[4992]: I1211 08:24:06.272497 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.272556 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.272571 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.272595 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.272611 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.280544 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892
b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.302110 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.318254 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.332926 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.343471 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.364167 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08
:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.375797 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.375847 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.375858 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.375878 4992 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.375889 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.384075 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba1004
8ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79
d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.398281 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:06Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.478732 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.478780 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.478792 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc 
kubenswrapper[4992]: I1211 08:24:06.478811 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.478823 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.581013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.581069 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.581082 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.581102 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.581114 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.683843 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.683890 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.683900 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.683918 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.683928 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.785999 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.786078 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.786093 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.786117 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.786130 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.888478 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.888537 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.888548 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.888569 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.888585 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.991969 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.992060 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.992087 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.992116 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:06 crc kubenswrapper[4992]: I1211 08:24:06.992134 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:06Z","lastTransitionTime":"2025-12-11T08:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094248 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:07 crc kubenswrapper[4992]: E1211 08:24:07.094387 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094323 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:07 crc kubenswrapper[4992]: E1211 08:24:07.094665 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094740 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094755 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094778 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.094793 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.198051 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.198119 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.198130 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.198149 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.198162 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.300798 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.300833 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.300842 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.300855 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.300864 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.403294 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.403347 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.403359 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.403376 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.403387 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.506249 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.506305 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.506320 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.506344 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.506357 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.610744 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.610813 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.610831 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.610856 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.610875 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.713420 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.713491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.713506 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.713527 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.713537 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.817247 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.817309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.817322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.817343 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.817357 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.919409 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.919461 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.919475 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.919498 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:07 crc kubenswrapper[4992]: I1211 08:24:07.919510 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:07Z","lastTransitionTime":"2025-12-11T08:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.022025 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.022081 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.022091 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.022117 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.022129 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.094975 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.095043 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:08 crc kubenswrapper[4992]: E1211 08:24:08.095159 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:08 crc kubenswrapper[4992]: E1211 08:24:08.095267 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.125660 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.125704 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.125715 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.125737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.125752 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.228889 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.228944 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.228955 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.228971 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.228982 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.332231 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.332287 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.332300 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.332319 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.332331 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.435564 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.435679 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.435707 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.435734 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.435757 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.539088 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.539127 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.539136 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.539152 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.539161 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.640983 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.641031 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.641044 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.641062 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.641076 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.743502 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.743561 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.743578 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.743600 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.743620 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.846618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.846680 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.846693 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.846707 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.846720 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.950361 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.950415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.950424 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.950445 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:08 crc kubenswrapper[4992]: I1211 08:24:08.950460 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:08Z","lastTransitionTime":"2025-12-11T08:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.053425 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.053477 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.053486 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.053501 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.053513 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.094278 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.094336 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.094490 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.094693 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.156897 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.156971 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.156983 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.157009 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.157024 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.198753 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.198842 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.198856 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.198876 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.198890 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.214107 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:09Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.220020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.220072 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.220087 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.220106 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.220118 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.234759 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:09Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.239743 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.239786 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.239800 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.239821 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.239832 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.255600 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:09Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.260750 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.260790 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.260804 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.260827 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.260841 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.277095 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:09Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.281757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.281809 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.281826 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.281849 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.281866 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.296677 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:09Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:09 crc kubenswrapper[4992]: E1211 08:24:09.296836 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.298353 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.298379 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.298390 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.298408 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.298419 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.401703 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.401751 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.401760 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.401779 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.401790 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.503962 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.504017 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.504032 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.504055 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.504071 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.608302 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.608383 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.608400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.608426 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.608451 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.711744 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.711797 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.711810 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.711833 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.711849 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.814940 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.815023 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.815042 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.815066 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.815083 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.921078 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.921132 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.921142 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.921160 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:09 crc kubenswrapper[4992]: I1211 08:24:09.921169 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:09Z","lastTransitionTime":"2025-12-11T08:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.024017 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.024052 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.024061 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.024081 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.024094 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.094959 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:10 crc kubenswrapper[4992]: E1211 08:24:10.095155 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.095615 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:10 crc kubenswrapper[4992]: E1211 08:24:10.095781 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.127773 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.127811 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.127820 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.127838 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.127849 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.230942 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.230982 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.230994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.231011 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.231025 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.334780 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.334825 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.334841 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.334866 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.334883 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.437505 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.437548 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.437558 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.437579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.437594 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.539739 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.539773 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.539780 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.539797 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.539809 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.642374 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.642437 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.642449 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.642468 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.642481 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.744776 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.744818 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.744830 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.744851 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.744864 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.848061 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.848114 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.848133 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.848151 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.848163 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.951627 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.951707 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.951721 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.951744 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:10 crc kubenswrapper[4992]: I1211 08:24:10.951758 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:10Z","lastTransitionTime":"2025-12-11T08:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.054184 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.054219 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.054228 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.054243 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.054254 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.094390 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.094995 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:11 crc kubenswrapper[4992]: E1211 08:24:11.095143 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:11 crc kubenswrapper[4992]: E1211 08:24:11.095305 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.157494 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.157552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.157569 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.157590 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.157604 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.260296 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.260338 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.260350 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.260372 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.260386 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.362960 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.363000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.363011 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.363027 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.363039 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.466155 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.466228 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.466250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.466278 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.466294 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.568861 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.568908 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.568920 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.568939 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.568951 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.671587 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.671662 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.671676 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.671699 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.671712 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.775458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.775507 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.775522 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.775546 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.775561 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.879605 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.879692 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.879704 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.879754 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.879770 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.983098 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.983162 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.983173 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.983198 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:11 crc kubenswrapper[4992]: I1211 08:24:11.983212 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:11Z","lastTransitionTime":"2025-12-11T08:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.086397 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.086469 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.086493 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.086523 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.086546 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.094896 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.095032 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:12 crc kubenswrapper[4992]: E1211 08:24:12.095101 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:12 crc kubenswrapper[4992]: E1211 08:24:12.095319 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.189903 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.189974 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.189990 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.190018 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.190042 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.293290 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.293344 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.293358 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.293379 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.293392 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.396415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.396457 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.396466 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.396483 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.396493 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.499598 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.499684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.499694 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.499716 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.499729 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.603033 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.603079 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.603090 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.603109 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.603120 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.706390 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.706434 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.706443 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.706464 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.706475 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.809101 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.809163 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.809174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.809195 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.809207 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.912461 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.912537 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.912561 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.912593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:12 crc kubenswrapper[4992]: I1211 08:24:12.912613 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:12Z","lastTransitionTime":"2025-12-11T08:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.015706 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.015784 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.015795 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.015820 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.015836 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.094588 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.094721 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:13 crc kubenswrapper[4992]: E1211 08:24:13.094854 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:13 crc kubenswrapper[4992]: E1211 08:24:13.094996 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.119160 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.119221 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.119240 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.119268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.119289 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.223344 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.223390 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.223404 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.223423 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.223436 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.326553 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.326614 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.326625 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.326661 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.326683 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.429596 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.429668 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.429677 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.429697 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.429712 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.531737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.531792 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.531803 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.531824 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.531838 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.634673 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.634712 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.634721 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.634738 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.634750 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.744474 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.744533 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.744545 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.744570 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.744588 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.847827 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.847892 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.847904 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.847929 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.847941 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.951707 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.951788 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.951811 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.951839 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:13 crc kubenswrapper[4992]: I1211 08:24:13.951861 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:13Z","lastTransitionTime":"2025-12-11T08:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.056078 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.056167 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.056187 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.056230 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.056269 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.094877 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:14 crc kubenswrapper[4992]: E1211 08:24:14.095062 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.094880 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:14 crc kubenswrapper[4992]: E1211 08:24:14.095156 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.159341 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.159377 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.159387 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.159405 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.159416 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.263080 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.263170 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.263208 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.263231 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.263244 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.366436 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.366509 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.366525 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.366551 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.366564 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.469266 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.469317 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.469326 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.469349 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.469361 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.572328 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.572380 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.572392 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.572412 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.572427 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.679939 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.679995 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.680011 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.680036 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.680054 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.783309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.783384 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.783405 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.783431 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.783452 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.886304 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.886384 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.886400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.886421 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.886435 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.990366 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.990423 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.990436 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.990461 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:14 crc kubenswrapper[4992]: I1211 08:24:14.990476 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:14Z","lastTransitionTime":"2025-12-11T08:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.093742 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.093809 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.093823 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.093847 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.093858 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.094021 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.094119 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:15 crc kubenswrapper[4992]: E1211 08:24:15.094169 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:15 crc kubenswrapper[4992]: E1211 08:24:15.094278 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.196857 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.196890 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.196898 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.196915 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.196925 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.299356 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.299403 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.299414 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.299466 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.299480 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.402140 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.402207 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.402224 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.402247 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.402267 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.505786 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.505837 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.505850 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.505870 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.505884 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.609339 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.609388 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.609398 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.609417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.609428 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.712349 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.712403 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.712419 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.712443 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.712459 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.816156 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.816222 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.816236 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.816262 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.816279 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.919361 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.919447 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.919458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.919500 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:15 crc kubenswrapper[4992]: I1211 08:24:15.919516 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:15Z","lastTransitionTime":"2025-12-11T08:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.022467 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.022525 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.022567 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.022593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.022612 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.106590 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:16 crc kubenswrapper[4992]: E1211 08:24:16.106831 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.106886 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.107963 4992 scope.go:117] "RemoveContainer" containerID="2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a" Dec 11 08:24:16 crc kubenswrapper[4992]: E1211 08:24:16.108927 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.124835 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.125288 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.125333 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.125349 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 
08:24:16.125371 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.125383 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.140668 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.154528 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.173578 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.191554 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.211954 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f
58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"20
25-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.231402 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.231473 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.231485 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.231505 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.231518 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.232528 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.244247 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d79
3426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.260068 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.283150 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68
cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.296913 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.312224 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.329357 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce38
9ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.334925 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.334957 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.334966 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.334986 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.334999 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.347849 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.363080 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc 
kubenswrapper[4992]: I1211 08:24:16.379918 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.401667 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:03Z\\\",\\\"message\\\":\\\"2025-12-11T08:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f\\\\n2025-12-11T08:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f to /host/opt/cni/bin/\\\\n2025-12-11T08:23:18Z [verbose] multus-daemon started\\\\n2025-12-11T08:23:18Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T08:24:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.420126 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7
ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:16Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.438989 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.439035 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.439053 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc 
kubenswrapper[4992]: I1211 08:24:16.439084 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.439097 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.541673 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.541770 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.541817 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.541833 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.541844 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.644687 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.644737 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.644747 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.644768 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.644781 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.746805 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.746851 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.746862 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.746881 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.746891 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.849311 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.849356 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.849367 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.849385 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.849394 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.952367 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.952423 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.952432 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.952452 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:16 crc kubenswrapper[4992]: I1211 08:24:16.952464 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:16Z","lastTransitionTime":"2025-12-11T08:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.055750 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.055831 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.055854 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.055885 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.055906 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.094242 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.094325 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:17 crc kubenswrapper[4992]: E1211 08:24:17.094461 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:17 crc kubenswrapper[4992]: E1211 08:24:17.094625 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.160209 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.160281 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.160300 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.160326 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.160344 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.263949 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.263999 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.264011 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.264033 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.264047 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.366835 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.366926 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.366947 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.366982 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.367003 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.470089 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.470135 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.470145 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.470165 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.470182 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.573980 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.574035 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.574048 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.574066 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.574079 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.676252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.676322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.676342 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.676370 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.676388 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.780129 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.780172 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.780184 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.780203 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.780216 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.883565 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.883627 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.883704 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.883736 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.883755 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.987010 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.987056 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.987065 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.987090 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:17 crc kubenswrapper[4992]: I1211 08:24:17.987102 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:17Z","lastTransitionTime":"2025-12-11T08:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.091207 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.091252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.091263 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.091281 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.091297 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.094620 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:18 crc kubenswrapper[4992]: E1211 08:24:18.094817 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.094877 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:18 crc kubenswrapper[4992]: E1211 08:24:18.095024 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.194530 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.194610 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.194622 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.194669 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.194687 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.308168 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.308230 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.308243 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.308269 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.308283 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.411248 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.411310 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.411320 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.411342 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.411359 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.514312 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.514379 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.514395 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.514425 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.514446 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.617128 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.617172 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.617183 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.617199 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.617209 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.709307 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/2.log" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.717490 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.718601 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.718965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.718994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.719005 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.719021 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.719032 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.736447 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.760166 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:03Z\\\",\\\"message\\\":\\\"2025-12-11T08:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f\\\\n2025-12-11T08:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f to /host/opt/cni/bin/\\\\n2025-12-11T08:23:18Z [verbose] multus-daemon started\\\\n2025-12-11T08:23:18Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T08:24:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.776834 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7
ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.790018 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc 
kubenswrapper[4992]: I1211 08:24:18.804918 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.817178 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.831491 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259
7126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.850139 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.860211 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.860270 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.860293 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.860321 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.860340 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.865992 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.911354 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.931448 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cd
fc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.948300 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b1
00c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.963835 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.963921 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.963939 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:18 crc 
kubenswrapper[4992]: I1211 08:24:18.963973 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.963991 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:18Z","lastTransitionTime":"2025-12-11T08:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:18 crc kubenswrapper[4992]: I1211 08:24:18.988104 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:18Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.043072 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9
a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.058116 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.067511 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.067549 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.067558 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.067575 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.067584 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.076021 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountP
ath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube
-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.094164 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.094235 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.094399 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.094617 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.105497 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.119322 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.170767 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.170815 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.170839 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.170899 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.170922 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.251828 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.252009 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.252051 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.252079 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.252118 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252252 4992 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252266 4992 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252312 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:25:23.252297601 +0000 UTC m=+147.511771527 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252411 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 08:25:23.252360122 +0000 UTC m=+147.511834038 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252529 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252580 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252599 4992 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252721 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 08:25:23.252693181 +0000 UTC m=+147.512167297 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.252915 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:23.252881165 +0000 UTC m=+147.512355091 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.253037 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.253063 4992 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.253078 4992 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.253124 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 08:25:23.253112481 +0000 UTC m=+147.512586407 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.274204 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.274265 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.274279 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.274304 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.274318 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.350905 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.350941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.350950 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.350966 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.350976 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.365675 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.369566 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.369588 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.369596 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.369610 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.369620 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.381026 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.383624 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.383667 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.383676 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.383688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.383697 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.396188 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.400965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.400993 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.401003 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.401020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.401031 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.413256 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.417531 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.417567 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.417579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.417600 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.417614 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.431342 4992 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ecae5cb9-c8d1-4572-b769-63cb0d588631\\\",\\\"systemUUID\\\":\\\"ccee3d03-425b-471b-a150-1d5509fbd062\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:19Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:19 crc kubenswrapper[4992]: E1211 08:24:19.431463 4992 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.433339 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.433374 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.433386 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.433407 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.433420 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.536626 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.536708 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.536719 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.536744 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.536758 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.640958 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.641000 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.641013 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.641031 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.641041 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.743881 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.744250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.744262 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.744276 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.744286 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.847552 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.847593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.847602 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.847620 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.847666 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.950953 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.951004 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.951020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.951044 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:19 crc kubenswrapper[4992]: I1211 08:24:19.951063 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:19Z","lastTransitionTime":"2025-12-11T08:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.054494 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.054540 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.054549 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.054567 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.054581 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.094820 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.094871 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:20 crc kubenswrapper[4992]: E1211 08:24:20.095011 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:20 crc kubenswrapper[4992]: E1211 08:24:20.095449 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.158186 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.158237 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.158252 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.158277 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.158293 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.261828 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.261917 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.261936 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.261971 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.261995 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.366139 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.366503 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.366697 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.366992 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.367269 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.470870 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.471224 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.471330 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.471432 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.471526 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.575380 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.575434 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.575447 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.575472 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.575486 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.679175 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.679232 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.679246 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.679266 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.679280 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.728722 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/3.log" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.729629 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/2.log" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.733208 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" exitCode=1 Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.733263 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.733389 4992 scope.go:117] "RemoveContainer" containerID="2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.735078 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:24:20 crc kubenswrapper[4992]: E1211 08:24:20.735332 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.751245 4992 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa3a4423-6d3f-4489-8e1e-a1b66bc77fec\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a42aff06b96307665a7addd5f62d96aca7f6da7d8bf08eaa9a33a138d665550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6980e9ac355e75f540fe37549892b0467e337ef14bdb964b12c9f0eb2d8edc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc8ed6516a1265160754a30d704649eecaca546f6400d0216bd9d729c546a81b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b91efef6e812b366965e4a355ce77443a796d2d64aa205251664bebdba4de\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.770846 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fff9172a05eb1650d9d6a4b5a5676185b5905ed210a9ca264978d64e855f883\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.782447 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.782488 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.782496 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.782512 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.782526 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.784231 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.800329 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.811654 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bjdzd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32e8bfcd-0901-4994-a7c3-3c33f8a4b67c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16a4176506587c559a1543dfb4832a280d4c6ce96648c80a7cf2addc851ec494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4ccj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bjdzd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.833322 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9482771-7584-4b92-a228-f20109e630a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://962da16cbe6057a5a85c142cf7cab59437c18a2967f3f825b9534772025f56cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1cd8a54880726a78ffa8b77ade1af4d3f8b0397fa12302cc2ea9f9f28219e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdb4dfe623da5122bb6d3a0d1d44cd3a37137f8782e2bc0e4c52f72a5ded57ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08
:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac1e3ce00298adf1e5344d8d729c03d861b55670dd4d85296eaa2cb6862886ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f01cfa0b371fd7cfcb05d47aeb9bd1833c3158b1813c93f265076103a4c395c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ae36b6bccb9ebed8600d5893bbbfba2aca6d67fbfd28a016f8c78b447f2a9f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1bc50f13093d3fb711869264ab78438055d95818d5b46f6699c79d1fc02a39a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e5d47389768b9ac9ae6aeadbe09816ecc0d50e0f4a278cc6385539a900916e66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.852822 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d3be416d-184d-47f9-846a-6304666886fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e1687d827c585c9ae611484cac0f637bd73ed0d343759c1bbd62b537a85e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abf95dbec9f8b7e1269f49de01148a688d63436859078d9a40fc8cce4b8a6130\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3c70ba10048ffe41e466ac6a9de6fae809482701c8f3243e737b068cf123a0f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd8ece9d79d9e42c69d2a272ffef185e3e675d08149927b982fb098788d2c4e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f548
c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f548c3428937d5d62ed410c2f7b716aab8f731b61496aab4011abb2daa87257\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8bf4731825a65820bad0d130e5c7a240118247fbf5661ae619725d978f79262\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:24Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d127dd625ac63c65b136cdfc648f4daeb780bd8e72858eae5c9d6d2a4289c719\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkl7x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2x9m4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.865966 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa42ae65-5fda-421e-b27a-6d8a0b2defb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dd6fa9916973ff00b62732263ed89287498edd27f73915ea14769b826147f64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8trsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-m8b9c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: 
I1211 08:24:20.884862 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.885225 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.885292 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.885310 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"95b3dedb-4cc4-42e8-aa58-a754541eb717\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9ce0f506190dc7c2aeda4da82381d369a673b25ef49ae5953867421d661241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://009b6564c7bcb0a5eb9c6d071642d2c80fd57c55de65b633c6d1f1dd7276bc4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35083987cad344e88dfeac8a7ae2cff276e83083042902118de91ccbf70fde53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.885368 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.885562 4992 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.900701 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f965562-76bd-4e1c-bc05-7483ad9e773d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
8d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T08:23:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 08:23:08.438238 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 08:23:08.439502 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3156693826/tls.crt::/tmp/serving-cert-3156693826/tls.key\\\\\\\"\\\\nI1211 08:23:14.213094 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 08:23:14.220345 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 08:23:14.220382 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 08:23:14.220418 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 08:23:14.220425 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 08:23:14.228468 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1211 08:23:14.228489 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1211 08:23:14.228504 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228513 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 08:23:14.228519 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 08:23:14.228524 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 08:23:14.228529 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 08:23:14.228533 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1211 08:23:14.230571 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:22:57Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:22:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:22:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:22:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.913580 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://611de9f1af681fb5c53dab4923b4c8c7e9f0b2b7738e1e6057a1b7c6a64840ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://953c5b9d5d9403bd6c5701fc19e59df54a2468147fc1856f41e885b24caef8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.925271 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ee88213b4991a1f1371806ac90f4119dd51501840985cc5c1c8653155061c57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.946271 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:17Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2766594a304aad2a0ec70e7ef20e190e9c8c5c2e4127b384c112b334c30e352a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:23:48Z\\\",\\\"message\\\":\\\"t-go/informers/factory.go:160\\\\nI1211 08:23:47.935838 6609 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.935907 6609 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:23:47.936165 6609 factory.go:656] Stopping watch factory\\\\nI1211 08:23:47.936197 
6609 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.936243 6609 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:23:47.936656 6609 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:23:47.937055 6609 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 08:23:47.980537 6609 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:23:47.980572 6609 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:23:47.980681 6609 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:23:47.980744 6609 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:23:47.980947 6609 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:19Z\\\",\\\"message\\\":\\\"ub.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI1211 08:24:19.559394 7025 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:24:19.560085 7025 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 08:24:19.561892 7025 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 08:24:19.561942 7025 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 08:24:19.565971 7025 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 08:24:19.566008 7025 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 08:24:19.566133 7025 factory.go:656] Stopping watch factory\\\\nI1211 08:24:19.566160 7025 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 08:24:19.566168 7025 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 08:24:19.733100 7025 ovnkube.go:599] Stopped ovnkube\\\\nI1211 08:24:19.733208 7025 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 08:24:19.733299 7025 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:24:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6k9hp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fbd2b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.960415 4992 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41015a59-be8f-40e9-9315-d4d0179897b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed36b8e934249ddb9d026b9fbac02b703610da477f0795ce02783839677a0f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b48652fd9795d657a75f90ff344884a5b5c9a424e6387e57efe8e0022d1a188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-svv4r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-q5bwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.981270 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.988589 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.988626 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.988669 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.988694 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:20 crc kubenswrapper[4992]: I1211 08:24:20.988715 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:20Z","lastTransitionTime":"2025-12-11T08:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.000496 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lglcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5838adfc-502f-44ac-be33-14f964497c4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:24:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T08:24:03Z\\\",\\\"message\\\":\\\"2025-12-11T08:23:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f\\\\n2025-12-11T08:23:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_21c1508f-8d03-479e-8e66-f5c5c6f9548f to /host/opt/cni/bin/\\\\n2025-12-11T08:23:18Z [verbose] multus-daemon started\\\\n2025-12-11T08:23:18Z [verbose] Readiness Indicator file check\\\\n2025-12-11T08:24:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T08:23:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:24:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9j64k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lglcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:20Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.015363 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6cjmj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"38d638dc-8df2-4332-9ffe-cb15ddbe91f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f375ea3889bdabebd8b9e7ae52822196220bb749733b9fad896899fa2f6359dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T08:23:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zj5v4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6cjmj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.029211 4992 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-j68fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b67a6a3-6d97-4b58-96d9-f0909df30802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T08:23:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbs6x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T08:23:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-j68fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T08:24:21Z is after 2025-08-24T17:21:41Z" Dec 11 08:24:21 crc 
kubenswrapper[4992]: I1211 08:24:21.091894 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.091932 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.091947 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.091967 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.091978 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.094225 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.094225 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:21 crc kubenswrapper[4992]: E1211 08:24:21.094353 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:21 crc kubenswrapper[4992]: E1211 08:24:21.094742 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.194789 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.195227 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.195333 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.195434 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.195535 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.299119 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.299203 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.299228 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.299258 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.299281 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.402829 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.402909 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.402930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.402953 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.402970 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.505500 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.505562 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.505583 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.505612 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.505675 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.608438 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.608486 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.608501 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.608522 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.608541 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.710991 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.711032 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.711041 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.711054 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.711069 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.738037 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/3.log" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.813940 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.814005 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.814027 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.814055 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.814079 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.916267 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.916300 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.916308 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.916322 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:21 crc kubenswrapper[4992]: I1211 08:24:21.916330 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:21Z","lastTransitionTime":"2025-12-11T08:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.019515 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.019571 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.019594 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.019623 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.019665 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.094896 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.094974 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:22 crc kubenswrapper[4992]: E1211 08:24:22.095072 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:22 crc kubenswrapper[4992]: E1211 08:24:22.095387 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.121880 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.121930 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.121940 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.121958 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.121973 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.224098 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.224158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.224174 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.224199 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.224218 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.327343 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.327391 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.327400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.327415 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.327424 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.429913 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.429969 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.429982 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.430001 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.430016 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.533359 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.533408 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.533418 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.533432 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.533443 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.637197 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.637234 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.637245 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.637261 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.637271 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.740384 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.740441 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.740450 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.740464 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.740474 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.843647 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.843703 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.843716 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.843735 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.843748 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.947537 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.947617 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.947685 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.947723 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:22 crc kubenswrapper[4992]: I1211 08:24:22.947748 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:22Z","lastTransitionTime":"2025-12-11T08:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.050870 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.050912 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.050925 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.050941 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.050955 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.094752 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.094788 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:23 crc kubenswrapper[4992]: E1211 08:24:23.095017 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:23 crc kubenswrapper[4992]: E1211 08:24:23.095153 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.153683 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.153724 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.153740 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.153757 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.153768 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.258795 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.258879 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.258901 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.258932 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.258955 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.361848 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.361918 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.361933 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.361955 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.361967 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.465538 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.465591 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.465608 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.465654 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.465673 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.569202 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.569272 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.569287 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.569310 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.569324 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.671938 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.671988 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.671999 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.672020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.672030 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.774684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.774744 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.774754 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.774774 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.774785 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.877440 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.877500 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.877512 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.877534 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.877549 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.980369 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.980448 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.980462 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.980513 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:23 crc kubenswrapper[4992]: I1211 08:24:23.980534 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:23Z","lastTransitionTime":"2025-12-11T08:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.083682 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.083778 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.083805 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.083826 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.083840 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.094479 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.094532 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:24 crc kubenswrapper[4992]: E1211 08:24:24.094744 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:24 crc kubenswrapper[4992]: E1211 08:24:24.094868 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.187089 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.187148 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.187157 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.187184 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.187194 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.291165 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.291237 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.291254 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.291284 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.291302 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.394020 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.394089 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.394106 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.394130 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.394161 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.497473 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.497535 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.497548 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.497566 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.497578 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.600572 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.600676 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.600695 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.600718 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.600736 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.705344 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.705412 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.705432 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.705458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.705477 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.809045 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.809115 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.809133 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.809161 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.809179 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.911591 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.911671 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.911690 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.911713 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:24 crc kubenswrapper[4992]: I1211 08:24:24.911733 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:24Z","lastTransitionTime":"2025-12-11T08:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.015038 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.015136 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.015170 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.015200 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.015223 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.094836 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.094875 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:25 crc kubenswrapper[4992]: E1211 08:24:25.095020 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:25 crc kubenswrapper[4992]: E1211 08:24:25.095529 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.111793 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.117917 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.117990 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.118006 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.118030 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.118046 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.221507 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.221558 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.221569 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.221588 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.221603 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.324159 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.324232 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.324253 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.324281 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.324300 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.427332 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.427424 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.427448 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.427485 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.427509 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.530609 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.530965 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.530977 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.530994 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.531007 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.634518 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.634593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.634619 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.634683 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.634708 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.738185 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.738235 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.738250 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.738268 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.738282 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.841785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.841841 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.841856 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.841876 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.841886 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.945417 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.945450 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.945459 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.945473 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:25 crc kubenswrapper[4992]: I1211 08:24:25.945482 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:25Z","lastTransitionTime":"2025-12-11T08:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.048510 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.048582 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.048603 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.048666 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.048687 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.095103 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.095202 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:26 crc kubenswrapper[4992]: E1211 08:24:26.095330 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:26 crc kubenswrapper[4992]: E1211 08:24:26.095675 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.151260 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.151306 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.151321 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.151345 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.151358 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.167607 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.167574722 podStartE2EDuration="1m11.167574722s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.144572142 +0000 UTC m=+90.404046088" watchObservedRunningTime="2025-12-11 08:24:26.167574722 +0000 UTC m=+90.427048658" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.193302 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2x9m4" podStartSLOduration=72.193236446 podStartE2EDuration="1m12.193236446s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.167415369 +0000 UTC m=+90.426889305" watchObservedRunningTime="2025-12-11 08:24:26.193236446 +0000 UTC m=+90.452710372" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.209720 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podStartSLOduration=72.209698933 podStartE2EDuration="1m12.209698933s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.19299674 +0000 UTC m=+90.452470666" watchObservedRunningTime="2025-12-11 08:24:26.209698933 +0000 UTC m=+90.469172859" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.210156 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-q5bwr" 
podStartSLOduration=71.210151974 podStartE2EDuration="1m11.210151974s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.209337865 +0000 UTC m=+90.468811791" watchObservedRunningTime="2025-12-11 08:24:26.210151974 +0000 UTC m=+90.469625900" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.227390 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.227364159 podStartE2EDuration="1m12.227364159s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.227299927 +0000 UTC m=+90.486773863" watchObservedRunningTime="2025-12-11 08:24:26.227364159 +0000 UTC m=+90.486838085" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.245661 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.245623728 podStartE2EDuration="1m11.245623728s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.245152057 +0000 UTC m=+90.504625983" watchObservedRunningTime="2025-12-11 08:24:26.245623728 +0000 UTC m=+90.505097654" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.259280 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.259335 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.259348 4992 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.259370 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.259384 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.334174 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lglcz" podStartSLOduration=72.33414115 podStartE2EDuration="1m12.33414115s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.331851027 +0000 UTC m=+90.591324963" watchObservedRunningTime="2025-12-11 08:24:26.33414115 +0000 UTC m=+90.593615096" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.362455 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6cjmj" podStartSLOduration=72.362433535 podStartE2EDuration="1m12.362433535s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.350207658 +0000 UTC m=+90.609681584" watchObservedRunningTime="2025-12-11 08:24:26.362433535 +0000 UTC m=+90.621907471" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.363820 4992 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.363854 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.363866 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.363883 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.363896 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.396053 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.396029715 podStartE2EDuration="44.396029715s" podCreationTimestamp="2025-12-11 08:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.39411336 +0000 UTC m=+90.653587296" watchObservedRunningTime="2025-12-11 08:24:26.396029715 +0000 UTC m=+90.655503641" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.396177 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bjdzd" podStartSLOduration=72.396168999 podStartE2EDuration="1m12.396168999s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.379122018 +0000 UTC m=+90.638595944" watchObservedRunningTime="2025-12-11 08:24:26.396168999 +0000 UTC m=+90.655642925" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.404457 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=1.404438403 podStartE2EDuration="1.404438403s" podCreationTimestamp="2025-12-11 08:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:26.403943942 +0000 UTC m=+90.663417868" watchObservedRunningTime="2025-12-11 08:24:26.404438403 +0000 UTC m=+90.663912329" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.465950 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 
08:24:26.465983 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.465991 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.466007 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.466019 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.569075 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.569122 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.569135 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.569158 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.569170 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.672123 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.672179 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.672195 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.672218 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.672230 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.775709 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.775785 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.775800 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.775823 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.775836 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.879731 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.879801 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.879812 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.879838 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.879854 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.985626 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.985740 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.985761 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.985792 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:26 crc kubenswrapper[4992]: I1211 08:24:26.985819 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:26Z","lastTransitionTime":"2025-12-11T08:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.088618 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.088684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.088695 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.088711 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.088722 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.094339 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.094387 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:27 crc kubenswrapper[4992]: E1211 08:24:27.094554 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:27 crc kubenswrapper[4992]: E1211 08:24:27.094696 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.193138 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.193209 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.193228 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.193256 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.193273 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.295580 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.295982 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.296100 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.296336 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.296457 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.399514 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.399549 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.399579 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.399595 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.399605 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.502766 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.503441 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.503583 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.503710 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.503804 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.606455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.606504 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.606514 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.606530 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.606541 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.708848 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.709209 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.709277 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.709357 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.709431 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.812602 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.812676 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.812688 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.812715 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.812730 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.916708 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.916774 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.916788 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.916811 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:27 crc kubenswrapper[4992]: I1211 08:24:27.916825 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:27Z","lastTransitionTime":"2025-12-11T08:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.019399 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.019763 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.019838 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.019927 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.019995 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.094563 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.094900 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:28 crc kubenswrapper[4992]: E1211 08:24:28.095242 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:28 crc kubenswrapper[4992]: E1211 08:24:28.095280 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.123257 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.123296 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.123309 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.123330 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.123343 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.226545 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.226589 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.226598 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.226616 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.226625 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.329413 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.329455 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.329466 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.329482 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.329492 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.433602 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.433672 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.433684 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.433702 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.433714 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.537549 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.538995 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.539239 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.539400 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.539554 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.642924 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.643358 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.643458 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.643593 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.643720 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.746779 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.747267 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.747435 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.747632 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.747773 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.850892 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.850939 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.850948 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.850962 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.850971 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.954214 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.954264 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.954278 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.954297 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:28 crc kubenswrapper[4992]: I1211 08:24:28.954309 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:28Z","lastTransitionTime":"2025-12-11T08:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.058060 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.058102 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.058112 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.058128 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.058138 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:29Z","lastTransitionTime":"2025-12-11T08:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.094337 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:29 crc kubenswrapper[4992]: E1211 08:24:29.094498 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.094362 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:29 crc kubenswrapper[4992]: E1211 08:24:29.094774 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.160689 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.161027 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.161121 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.161202 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.161265 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:29Z","lastTransitionTime":"2025-12-11T08:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.263538 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.263576 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.263587 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.263601 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.263611 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:29Z","lastTransitionTime":"2025-12-11T08:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.366043 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.366399 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.366491 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.366576 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.366676 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:29Z","lastTransitionTime":"2025-12-11T08:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.464841 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.464893 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.464911 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.464936 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.464953 4992 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T08:24:29Z","lastTransitionTime":"2025-12-11T08:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.529100 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx"] Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.529473 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.531438 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.533694 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.534352 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.536079 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.560865 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aae60317-ee54-48eb-9819-00a0c3524067-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.560911 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae60317-ee54-48eb-9819-00a0c3524067-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.560934 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/aae60317-ee54-48eb-9819-00a0c3524067-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.560951 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae60317-ee54-48eb-9819-00a0c3524067-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.561000 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae60317-ee54-48eb-9819-00a0c3524067-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662229 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aae60317-ee54-48eb-9819-00a0c3524067-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662283 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae60317-ee54-48eb-9819-00a0c3524067-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662304 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aae60317-ee54-48eb-9819-00a0c3524067-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662323 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae60317-ee54-48eb-9819-00a0c3524067-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662339 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aae60317-ee54-48eb-9819-00a0c3524067-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662380 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae60317-ee54-48eb-9819-00a0c3524067-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.662423 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/aae60317-ee54-48eb-9819-00a0c3524067-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.663252 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae60317-ee54-48eb-9819-00a0c3524067-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.669392 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aae60317-ee54-48eb-9819-00a0c3524067-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.678798 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae60317-ee54-48eb-9819-00a0c3524067-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rcrnx\" (UID: \"aae60317-ee54-48eb-9819-00a0c3524067\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:29 crc kubenswrapper[4992]: I1211 08:24:29.849838 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" Dec 11 08:24:30 crc kubenswrapper[4992]: I1211 08:24:30.094813 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:30 crc kubenswrapper[4992]: I1211 08:24:30.094897 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:30 crc kubenswrapper[4992]: E1211 08:24:30.095010 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:30 crc kubenswrapper[4992]: E1211 08:24:30.095185 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:30 crc kubenswrapper[4992]: I1211 08:24:30.771675 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" event={"ID":"aae60317-ee54-48eb-9819-00a0c3524067","Type":"ContainerStarted","Data":"9615118a93d00af8dab69fadc9ba3600cd9fa06fa3ba0327a7ab801ba9a6fb86"} Dec 11 08:24:30 crc kubenswrapper[4992]: I1211 08:24:30.771754 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" event={"ID":"aae60317-ee54-48eb-9819-00a0c3524067","Type":"ContainerStarted","Data":"eec73592982818ba33556955ba529cca1780de73a8f4d3a78a15665e772e4b26"} Dec 11 08:24:31 crc kubenswrapper[4992]: I1211 08:24:31.094958 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:31 crc kubenswrapper[4992]: I1211 08:24:31.094978 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:31 crc kubenswrapper[4992]: E1211 08:24:31.095134 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:31 crc kubenswrapper[4992]: E1211 08:24:31.095340 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:32 crc kubenswrapper[4992]: I1211 08:24:32.094767 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:32 crc kubenswrapper[4992]: I1211 08:24:32.094823 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:32 crc kubenswrapper[4992]: E1211 08:24:32.095110 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:32 crc kubenswrapper[4992]: E1211 08:24:32.095233 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:33 crc kubenswrapper[4992]: I1211 08:24:33.002355 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:33 crc kubenswrapper[4992]: E1211 08:24:33.002684 4992 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:24:33 crc kubenswrapper[4992]: E1211 08:24:33.002817 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs podName:1b67a6a3-6d97-4b58-96d9-f0909df30802 nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.002786649 +0000 UTC m=+161.262260745 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs") pod "network-metrics-daemon-j68fr" (UID: "1b67a6a3-6d97-4b58-96d9-f0909df30802") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 08:24:33 crc kubenswrapper[4992]: I1211 08:24:33.094829 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:33 crc kubenswrapper[4992]: I1211 08:24:33.094858 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:33 crc kubenswrapper[4992]: E1211 08:24:33.095009 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:33 crc kubenswrapper[4992]: E1211 08:24:33.095420 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:33 crc kubenswrapper[4992]: I1211 08:24:33.095829 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:24:33 crc kubenswrapper[4992]: E1211 08:24:33.096044 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:24:33 crc kubenswrapper[4992]: I1211 08:24:33.129162 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rcrnx" podStartSLOduration=79.129133931 podStartE2EDuration="1m19.129133931s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:24:30.792436224 +0000 UTC m=+95.051910160" watchObservedRunningTime="2025-12-11 08:24:33.129133931 +0000 UTC m=+97.388607857" Dec 11 08:24:34 crc kubenswrapper[4992]: I1211 08:24:34.095009 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:34 crc kubenswrapper[4992]: I1211 08:24:34.095085 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:34 crc kubenswrapper[4992]: E1211 08:24:34.095175 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:34 crc kubenswrapper[4992]: E1211 08:24:34.095270 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:35 crc kubenswrapper[4992]: I1211 08:24:35.094014 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:35 crc kubenswrapper[4992]: I1211 08:24:35.094029 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:35 crc kubenswrapper[4992]: E1211 08:24:35.094244 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:35 crc kubenswrapper[4992]: E1211 08:24:35.094407 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:36 crc kubenswrapper[4992]: I1211 08:24:36.094492 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:36 crc kubenswrapper[4992]: I1211 08:24:36.094544 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:36 crc kubenswrapper[4992]: E1211 08:24:36.096222 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:36 crc kubenswrapper[4992]: E1211 08:24:36.096320 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:37 crc kubenswrapper[4992]: I1211 08:24:37.094299 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:37 crc kubenswrapper[4992]: E1211 08:24:37.094569 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:37 crc kubenswrapper[4992]: I1211 08:24:37.095063 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:37 crc kubenswrapper[4992]: E1211 08:24:37.095325 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:38 crc kubenswrapper[4992]: I1211 08:24:38.094450 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:38 crc kubenswrapper[4992]: I1211 08:24:38.094540 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:38 crc kubenswrapper[4992]: E1211 08:24:38.094592 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:38 crc kubenswrapper[4992]: E1211 08:24:38.094807 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:39 crc kubenswrapper[4992]: I1211 08:24:39.094924 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:39 crc kubenswrapper[4992]: I1211 08:24:39.095054 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:39 crc kubenswrapper[4992]: E1211 08:24:39.095137 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:39 crc kubenswrapper[4992]: E1211 08:24:39.095248 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:40 crc kubenswrapper[4992]: I1211 08:24:40.094684 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:40 crc kubenswrapper[4992]: E1211 08:24:40.094863 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:40 crc kubenswrapper[4992]: I1211 08:24:40.094656 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:40 crc kubenswrapper[4992]: E1211 08:24:40.095231 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:41 crc kubenswrapper[4992]: I1211 08:24:41.094279 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:41 crc kubenswrapper[4992]: I1211 08:24:41.094339 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:41 crc kubenswrapper[4992]: E1211 08:24:41.094482 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:41 crc kubenswrapper[4992]: E1211 08:24:41.094600 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:42 crc kubenswrapper[4992]: I1211 08:24:42.094179 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:42 crc kubenswrapper[4992]: E1211 08:24:42.094389 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:42 crc kubenswrapper[4992]: I1211 08:24:42.094751 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:42 crc kubenswrapper[4992]: E1211 08:24:42.095261 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:43 crc kubenswrapper[4992]: I1211 08:24:43.095048 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:43 crc kubenswrapper[4992]: E1211 08:24:43.095237 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:43 crc kubenswrapper[4992]: I1211 08:24:43.095096 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:43 crc kubenswrapper[4992]: E1211 08:24:43.095979 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:44 crc kubenswrapper[4992]: I1211 08:24:44.094827 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:44 crc kubenswrapper[4992]: I1211 08:24:44.094863 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:44 crc kubenswrapper[4992]: E1211 08:24:44.094992 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:44 crc kubenswrapper[4992]: E1211 08:24:44.095096 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:45 crc kubenswrapper[4992]: I1211 08:24:45.094555 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:45 crc kubenswrapper[4992]: I1211 08:24:45.094594 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:45 crc kubenswrapper[4992]: E1211 08:24:45.094820 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:45 crc kubenswrapper[4992]: E1211 08:24:45.094918 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:46 crc kubenswrapper[4992]: I1211 08:24:46.094064 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:46 crc kubenswrapper[4992]: I1211 08:24:46.094182 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:46 crc kubenswrapper[4992]: E1211 08:24:46.101572 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:46 crc kubenswrapper[4992]: E1211 08:24:46.101936 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:47 crc kubenswrapper[4992]: I1211 08:24:47.093995 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:47 crc kubenswrapper[4992]: E1211 08:24:47.094180 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:47 crc kubenswrapper[4992]: I1211 08:24:47.094890 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:47 crc kubenswrapper[4992]: E1211 08:24:47.095210 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:48 crc kubenswrapper[4992]: I1211 08:24:48.094291 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:48 crc kubenswrapper[4992]: E1211 08:24:48.094674 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:48 crc kubenswrapper[4992]: I1211 08:24:48.094819 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:48 crc kubenswrapper[4992]: E1211 08:24:48.095011 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:48 crc kubenswrapper[4992]: I1211 08:24:48.097150 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:24:48 crc kubenswrapper[4992]: E1211 08:24:48.097625 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fbd2b_openshift-ovn-kubernetes(216d94db-3002-48a3-b3c2-2a3201f4d6cd)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" Dec 11 08:24:49 crc kubenswrapper[4992]: I1211 08:24:49.094551 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:49 crc kubenswrapper[4992]: I1211 08:24:49.094626 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:49 crc kubenswrapper[4992]: E1211 08:24:49.095056 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:49 crc kubenswrapper[4992]: E1211 08:24:49.095241 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.094733 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.094826 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:50 crc kubenswrapper[4992]: E1211 08:24:50.095830 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:50 crc kubenswrapper[4992]: E1211 08:24:50.095865 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.845543 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/1.log" Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.846349 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/0.log" Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.846425 4992 generic.go:334] "Generic (PLEG): container finished" podID="5838adfc-502f-44ac-be33-14f964497c4f" containerID="04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329" exitCode=1 Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.846471 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerDied","Data":"04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329"} Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.846529 4992 scope.go:117] "RemoveContainer" containerID="59bc0f0eb343df1fddfbc7757c5224c14e59d458f3a2dec0008b993841f68294" Dec 11 08:24:50 crc kubenswrapper[4992]: I1211 08:24:50.847439 4992 scope.go:117] "RemoveContainer" containerID="04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329" Dec 11 08:24:50 crc kubenswrapper[4992]: E1211 08:24:50.847863 
4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lglcz_openshift-multus(5838adfc-502f-44ac-be33-14f964497c4f)\"" pod="openshift-multus/multus-lglcz" podUID="5838adfc-502f-44ac-be33-14f964497c4f" Dec 11 08:24:51 crc kubenswrapper[4992]: I1211 08:24:51.094181 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:51 crc kubenswrapper[4992]: I1211 08:24:51.094212 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:51 crc kubenswrapper[4992]: E1211 08:24:51.094394 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:51 crc kubenswrapper[4992]: E1211 08:24:51.094585 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:51 crc kubenswrapper[4992]: I1211 08:24:51.852217 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/1.log" Dec 11 08:24:52 crc kubenswrapper[4992]: I1211 08:24:52.094446 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:52 crc kubenswrapper[4992]: I1211 08:24:52.094498 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:52 crc kubenswrapper[4992]: E1211 08:24:52.095225 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:52 crc kubenswrapper[4992]: E1211 08:24:52.095303 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:53 crc kubenswrapper[4992]: I1211 08:24:53.094360 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:53 crc kubenswrapper[4992]: E1211 08:24:53.094557 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:53 crc kubenswrapper[4992]: I1211 08:24:53.094377 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:53 crc kubenswrapper[4992]: E1211 08:24:53.094869 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:54 crc kubenswrapper[4992]: I1211 08:24:54.094458 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:54 crc kubenswrapper[4992]: E1211 08:24:54.095183 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:54 crc kubenswrapper[4992]: I1211 08:24:54.094460 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:54 crc kubenswrapper[4992]: E1211 08:24:54.095888 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:55 crc kubenswrapper[4992]: I1211 08:24:55.094287 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:55 crc kubenswrapper[4992]: E1211 08:24:55.094495 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:55 crc kubenswrapper[4992]: I1211 08:24:55.094287 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:55 crc kubenswrapper[4992]: E1211 08:24:55.094613 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:56 crc kubenswrapper[4992]: E1211 08:24:56.093489 4992 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 11 08:24:56 crc kubenswrapper[4992]: I1211 08:24:56.094776 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:56 crc kubenswrapper[4992]: E1211 08:24:56.096826 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:56 crc kubenswrapper[4992]: I1211 08:24:56.096886 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:56 crc kubenswrapper[4992]: E1211 08:24:56.096975 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:56 crc kubenswrapper[4992]: E1211 08:24:56.185141 4992 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 08:24:57 crc kubenswrapper[4992]: I1211 08:24:57.095007 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:57 crc kubenswrapper[4992]: E1211 08:24:57.095721 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:24:57 crc kubenswrapper[4992]: I1211 08:24:57.095015 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:57 crc kubenswrapper[4992]: E1211 08:24:57.096252 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:58 crc kubenswrapper[4992]: I1211 08:24:58.094676 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:24:58 crc kubenswrapper[4992]: I1211 08:24:58.094745 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:24:58 crc kubenswrapper[4992]: E1211 08:24:58.095841 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:24:58 crc kubenswrapper[4992]: E1211 08:24:58.096315 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:24:59 crc kubenswrapper[4992]: I1211 08:24:59.094254 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:24:59 crc kubenswrapper[4992]: E1211 08:24:59.094464 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:24:59 crc kubenswrapper[4992]: I1211 08:24:59.094554 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:24:59 crc kubenswrapper[4992]: E1211 08:24:59.095095 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:00 crc kubenswrapper[4992]: I1211 08:25:00.095107 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:00 crc kubenswrapper[4992]: I1211 08:25:00.095231 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:00 crc kubenswrapper[4992]: E1211 08:25:00.095351 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:25:00 crc kubenswrapper[4992]: E1211 08:25:00.095533 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:25:01 crc kubenswrapper[4992]: I1211 08:25:01.094528 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:01 crc kubenswrapper[4992]: I1211 08:25:01.094569 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:01 crc kubenswrapper[4992]: E1211 08:25:01.094776 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:25:01 crc kubenswrapper[4992]: E1211 08:25:01.094932 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:01 crc kubenswrapper[4992]: E1211 08:25:01.187152 4992 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 08:25:02 crc kubenswrapper[4992]: I1211 08:25:02.102031 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:02 crc kubenswrapper[4992]: I1211 08:25:02.102136 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:02 crc kubenswrapper[4992]: E1211 08:25:02.102969 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:25:02 crc kubenswrapper[4992]: E1211 08:25:02.103518 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:25:03 crc kubenswrapper[4992]: I1211 08:25:03.094426 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:03 crc kubenswrapper[4992]: I1211 08:25:03.094493 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:03 crc kubenswrapper[4992]: E1211 08:25:03.094996 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:25:03 crc kubenswrapper[4992]: E1211 08:25:03.095194 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:03 crc kubenswrapper[4992]: I1211 08:25:03.095338 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.061971 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/3.log" Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.074144 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerStarted","Data":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.074690 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.094363 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.094446 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:04 crc kubenswrapper[4992]: E1211 08:25:04.094541 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:25:04 crc kubenswrapper[4992]: E1211 08:25:04.094619 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.863348 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podStartSLOduration=110.863305333 podStartE2EDuration="1m50.863305333s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:04.132315091 +0000 UTC m=+128.391789007" watchObservedRunningTime="2025-12-11 08:25:04.863305333 +0000 UTC m=+129.122779269" Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.863983 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j68fr"] Dec 11 08:25:04 crc kubenswrapper[4992]: I1211 08:25:04.864148 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:04 crc kubenswrapper[4992]: E1211 08:25:04.864273 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:05 crc kubenswrapper[4992]: I1211 08:25:05.094342 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:05 crc kubenswrapper[4992]: E1211 08:25:05.094699 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:25:05 crc kubenswrapper[4992]: I1211 08:25:05.094717 4992 scope.go:117] "RemoveContainer" containerID="04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329" Dec 11 08:25:06 crc kubenswrapper[4992]: I1211 08:25:06.083534 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/1.log" Dec 11 08:25:06 crc kubenswrapper[4992]: I1211 08:25:06.084014 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerStarted","Data":"88cd5d23fc1cf16747d24d55152252142e221cc14a1a4fb4bb157a484b76bd2c"} Dec 11 08:25:06 crc kubenswrapper[4992]: I1211 08:25:06.094703 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:06 crc kubenswrapper[4992]: I1211 08:25:06.094716 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:06 crc kubenswrapper[4992]: E1211 08:25:06.094836 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:25:06 crc kubenswrapper[4992]: E1211 08:25:06.095139 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:25:06 crc kubenswrapper[4992]: E1211 08:25:06.188034 4992 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 08:25:07 crc kubenswrapper[4992]: I1211 08:25:07.094424 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:07 crc kubenswrapper[4992]: E1211 08:25:07.094690 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:25:07 crc kubenswrapper[4992]: I1211 08:25:07.094971 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:07 crc kubenswrapper[4992]: E1211 08:25:07.105015 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:08 crc kubenswrapper[4992]: I1211 08:25:08.095890 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:08 crc kubenswrapper[4992]: I1211 08:25:08.095952 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:08 crc kubenswrapper[4992]: E1211 08:25:08.096135 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:25:08 crc kubenswrapper[4992]: E1211 08:25:08.096329 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:25:09 crc kubenswrapper[4992]: I1211 08:25:09.094358 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:09 crc kubenswrapper[4992]: I1211 08:25:09.094440 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:09 crc kubenswrapper[4992]: E1211 08:25:09.094577 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:25:09 crc kubenswrapper[4992]: E1211 08:25:09.094714 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:10 crc kubenswrapper[4992]: I1211 08:25:10.095084 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:10 crc kubenswrapper[4992]: E1211 08:25:10.095245 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 08:25:10 crc kubenswrapper[4992]: I1211 08:25:10.095093 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:10 crc kubenswrapper[4992]: E1211 08:25:10.095355 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 08:25:11 crc kubenswrapper[4992]: I1211 08:25:11.094187 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:11 crc kubenswrapper[4992]: E1211 08:25:11.094311 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 08:25:11 crc kubenswrapper[4992]: I1211 08:25:11.094187 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:11 crc kubenswrapper[4992]: E1211 08:25:11.094385 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-j68fr" podUID="1b67a6a3-6d97-4b58-96d9-f0909df30802" Dec 11 08:25:12 crc kubenswrapper[4992]: I1211 08:25:12.094832 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:12 crc kubenswrapper[4992]: I1211 08:25:12.094943 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 08:25:12 crc kubenswrapper[4992]: I1211 08:25:12.098308 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 08:25:12 crc kubenswrapper[4992]: I1211 08:25:12.099413 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 08:25:13 crc kubenswrapper[4992]: I1211 08:25:13.095041 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:13 crc kubenswrapper[4992]: I1211 08:25:13.095046 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:13 crc kubenswrapper[4992]: I1211 08:25:13.097886 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 08:25:13 crc kubenswrapper[4992]: I1211 08:25:13.097997 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 08:25:13 crc kubenswrapper[4992]: I1211 08:25:13.098488 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 08:25:13 crc kubenswrapper[4992]: I1211 08:25:13.098853 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.602574 4992 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.642364 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k7jwn"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.643427 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.644844 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-46cbx"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.645292 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.650447 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.651062 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.651481 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.652288 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g5d6r"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.652467 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.653009 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.654152 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.654856 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.655980 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.657181 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.657247 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.657417 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.657531 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.657551 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.658349 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.688051 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.688236 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.690026 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.690944 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.691350 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.694019 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.695384 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.699179 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.703713 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.704290 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.704712 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q5n2b"] Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.704742 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.705097 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-q5n2b"
Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.705359 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-h6k7l"]
Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.705687 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2"
Dec 11 08:25:20 crc kubenswrapper[4992]: I1211 08:25:20.705728 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-h6k7l"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.352383 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.352815 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.354295 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.354679 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.355508 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.355911 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.357370 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.367231 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsnmh"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.369198 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9qd6n"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.369710 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.375053 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.378676 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.400016 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.400550 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.403392 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.410148 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408853 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.410689 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.403484 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.403578 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.404015 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.404600 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.405409 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.405768 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406199 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406329 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406376 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406505 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406556 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406683 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406715 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406845 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.406918 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.407001 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.407043 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.407207 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.407206 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.407356 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.407758 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408019 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408080 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408241 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408312 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408347 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408702 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408806 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.408989 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.409126 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.409817 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.415882 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.417949 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.418173 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.418320 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.418516 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.419087 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.418168 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.421702 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.422242 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.429679 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zphhl"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.441462 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.442327 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443240 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443398 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443518 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443592 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443522 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443603 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443693 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443540 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443250 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443977 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443980 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.443926 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.444121 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.444132 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.444159 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.444242 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.444298 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.444430 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.445229 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.445725 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gwnxp"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.445941 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.447017 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9lslc"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.447551 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.447663 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.449206 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.451464 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452262 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452556 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g5d6r"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452688 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452860 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452929 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lslc"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452989 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gwnxp"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.453110 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q5n2b"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.453172 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.452398 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lslc"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.457753 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-46cbx"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.457828 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsnmh"]
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458794 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-audit-policies\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458858 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-config\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458881 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-etcd-serving-ca\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458897 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea4eb1b-17a9-468f-981b-b26d90c75221-audit-dir\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458914 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/901e3706-7b64-427f-a052-b4fc84b8304e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458938 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e0ade2-2381-44e7-ad73-4bda0f48231c-config\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458954 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe3eedb0-f613-4104-ba42-a22301757402-audit-dir\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.458970 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459004 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e3706-7b64-427f-a052-b4fc84b8304e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459023 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459050 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-audit\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459066 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-image-import-ca\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459096 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459111 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09e0ade2-2381-44e7-ad73-4bda0f48231c-auth-proxy-config\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459127 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48288264-5766-4d38-956b-68434cf4c955-trusted-ca\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459145 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xx49\" (UniqueName: \"kubernetes.io/projected/c781345d-eebb-4b54-98f5-2a51cdb942a0-kube-api-access-2xx49\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459166 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcpp4\" (UniqueName: \"kubernetes.io/projected/aa46ae91-4a07-4663-82c7-ba8c6fac622e-kube-api-access-lcpp4\") pod \"dns-operator-744455d44c-dsnmh\" (UID: \"aa46ae91-4a07-4663-82c7-ba8c6fac622e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459184 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-config\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459229 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-audit-policies\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459247 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eea4eb1b-17a9-468f-981b-b26d90c75221-node-pullsecrets\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459262 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459284 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c6dbb7-21d9-44dd-bf69-752289a02ca4-serving-cert\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459300 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa46ae91-4a07-4663-82c7-ba8c6fac622e-metrics-tls\") pod \"dns-operator-744455d44c-dsnmh\" (UID: \"aa46ae91-4a07-4663-82c7-ba8c6fac622e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459315 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459331 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-serving-cert\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459348 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdpwq\" (UniqueName: \"kubernetes.io/projected/83b64ec0-5648-49b2-9e7e-32834c30e7a9-kube-api-access-tdpwq\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459364 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-service-ca\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459379 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-serving-cert\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459414 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459431 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b64ec0-5648-49b2-9e7e-32834c30e7a9-serving-cert\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459447 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5t9g\" (UniqueName: \"kubernetes.io/projected/09e0ade2-2381-44e7-ad73-4bda0f48231c-kube-api-access-m5t9g\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f77b180-f28c-472b-a577-44ef5012100c-config\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459488 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-config\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459503 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmzk\" (UniqueName: \"kubernetes.io/projected/4fc4ca1b-b60d-484e-807e-034d452122f2-kube-api-access-rgmzk\") pod \"cluster-samples-operator-665b6dd947-n4w6w\" (UID: \"4fc4ca1b-b60d-484e-807e-034d452122f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459537 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-etcd-client\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459557 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459573 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-client-ca\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459586 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459602 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-client-ca\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.459619 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkz5q\" (UniqueName: \"kubernetes.io/projected/901e3706-7b64-427f-a052-b4fc84b8304e-kube-api-access-lkz5q\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.464925 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.465348 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.466295 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.466423 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.466506 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.466590 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.467377 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.467583 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.483252 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.484851 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.485096 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.485723 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.485933 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.486699 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.487660 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.488253 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.488534 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.488769 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.488983 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.489957 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 11 08:25:21 crc
kubenswrapper[4992]: I1211 08:25:21.490153 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-trusted-ca-bundle\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490256 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-oauth-serving-cert\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490312 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-encryption-config\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490362 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/06c6dbb7-21d9-44dd-bf69-752289a02ca4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490410 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490440 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490473 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-oauth-config\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490507 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h27n\" (UniqueName: \"kubernetes.io/projected/06c6dbb7-21d9-44dd-bf69-752289a02ca4-kube-api-access-2h27n\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490539 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5bba427-d8bb-4c0f-a609-4c9c556056e0-serving-cert\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 
08:25:21.490614 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f77b180-f28c-472b-a577-44ef5012100c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490668 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490695 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2d8724-2ac5-4542-878b-e2c9e33e8718-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490761 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-config\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490798 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx2n\" (UniqueName: 
\"kubernetes.io/projected/7f77b180-f28c-472b-a577-44ef5012100c-kube-api-access-ssx2n\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490858 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6prp\" (UniqueName: \"kubernetes.io/projected/973372a1-5f38-40b5-8837-bd2236baf511-kube-api-access-d6prp\") pod \"downloads-7954f5f757-h6k7l\" (UID: \"973372a1-5f38-40b5-8837-bd2236baf511\") " pod="openshift-console/downloads-7954f5f757-h6k7l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490920 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.490995 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkkwt\" (UniqueName: \"kubernetes.io/projected/eea4eb1b-17a9-468f-981b-b26d90c75221-kube-api-access-bkkwt\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491093 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306b229e-5b0e-4c77-83ce-f95f1176dc2b-serving-cert\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 
crc kubenswrapper[4992]: I1211 08:25:21.491151 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c781345d-eebb-4b54-98f5-2a51cdb942a0-audit-dir\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491190 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4plsx\" (UniqueName: \"kubernetes.io/projected/e5bba427-d8bb-4c0f-a609-4c9c556056e0-kube-api-access-4plsx\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491230 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzsn\" (UniqueName: \"kubernetes.io/projected/306b229e-5b0e-4c77-83ce-f95f1176dc2b-kube-api-access-chzsn\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491284 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-serving-cert\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491330 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491376 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj6hq\" (UniqueName: \"kubernetes.io/projected/48288264-5766-4d38-956b-68434cf4c955-kube-api-access-lj6hq\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491412 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4fc4ca1b-b60d-484e-807e-034d452122f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n4w6w\" (UID: \"4fc4ca1b-b60d-484e-807e-034d452122f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491450 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48288264-5766-4d38-956b-68434cf4c955-config\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491479 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/09e0ade2-2381-44e7-ad73-4bda0f48231c-machine-approver-tls\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491540 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48288264-5766-4d38-956b-68434cf4c955-serving-cert\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491577 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-etcd-client\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491619 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491669 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6kx\" (UniqueName: \"kubernetes.io/projected/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-kube-api-access-vb6kx\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491703 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491732 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmgsl\" (UniqueName: \"kubernetes.io/projected/fe3eedb0-f613-4104-ba42-a22301757402-kube-api-access-gmgsl\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491764 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-encryption-config\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491804 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.491826 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-serving-cert\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 
crc kubenswrapper[4992]: I1211 08:25:21.491905 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-config\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.492025 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.492103 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f77b180-f28c-472b-a577-44ef5012100c-images\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.492162 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpwr5\" (UniqueName: \"kubernetes.io/projected/4c2d8724-2ac5-4542-878b-e2c9e33e8718-kube-api-access-vpwr5\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.492212 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4c2d8724-2ac5-4542-878b-e2c9e33e8718-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.492269 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-config\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.494621 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.467724 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.467764 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.495046 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.497879 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.510981 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.512536 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.513607 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.515824 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zphhl"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.518062 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k7jwn"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.518104 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.518668 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.521806 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j86m7"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.522331 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mhq4b"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.522687 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wt2b6"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.523167 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.523302 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.523834 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.524895 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.525273 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.525891 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8tgl"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.526362 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.526762 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.527148 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.530373 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.531755 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.531845 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532091 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h6k7l"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532141 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532149 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532325 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532359 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532495 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532598 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532697 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532789 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.532954 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533045 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533075 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533115 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 
08:25:21.533196 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533266 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533320 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533386 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533446 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533463 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533577 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533718 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533823 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.533833 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.534008 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 
11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.534226 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.534681 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.535255 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.535630 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.536626 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.537804 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.540541 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.541271 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.545398 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j86m7"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.547825 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-c8x8k"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.548783 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c8x8k" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.548950 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.550000 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.550216 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.551035 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.551600 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.552192 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.553059 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.553751 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.554602 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.555138 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.555817 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4zftz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.556899 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.556957 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.567096 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.570770 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-j7jhb"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.570895 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.572352 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cptm9"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.572675 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.573141 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mw5hz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.574262 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.574849 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.575366 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.575972 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wt2b6"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.576107 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.579972 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.581671 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.583388 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.584818 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.586268 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.587227 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.588387 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.589998 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.591915 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9qd6n"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593028 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306b229e-5b0e-4c77-83ce-f95f1176dc2b-serving-cert\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593062 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c781345d-eebb-4b54-98f5-2a51cdb942a0-audit-dir\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593091 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ftl6\" (UniqueName: \"kubernetes.io/projected/ee5c594f-6372-4e85-91f1-363525dd5abe-kube-api-access-8ftl6\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593114 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw9rk\" (UniqueName: \"kubernetes.io/projected/448a032a-3b5d-494b-b7e4-55298af61b9e-kube-api-access-zw9rk\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593136 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4plsx\" (UniqueName: \"kubernetes.io/projected/e5bba427-d8bb-4c0f-a609-4c9c556056e0-kube-api-access-4plsx\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593156 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/448a032a-3b5d-494b-b7e4-55298af61b9e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593175 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-trusted-ca\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593192 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69ld4\" (UniqueName: \"kubernetes.io/projected/b818ac68-9e90-4212-b658-8a946fff5cfc-kube-api-access-69ld4\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593202 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c781345d-eebb-4b54-98f5-2a51cdb942a0-audit-dir\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593225 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chzsn\" (UniqueName: 
\"kubernetes.io/projected/306b229e-5b0e-4c77-83ce-f95f1176dc2b-kube-api-access-chzsn\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593252 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593415 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/448a032a-3b5d-494b-b7e4-55298af61b9e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593467 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee5c594f-6372-4e85-91f1-363525dd5abe-srv-cert\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593512 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-serving-cert\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " 
pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593555 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48288264-5766-4d38-956b-68434cf4c955-config\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593606 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj6hq\" (UniqueName: \"kubernetes.io/projected/48288264-5766-4d38-956b-68434cf4c955-kube-api-access-lj6hq\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.593643 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4fc4ca1b-b60d-484e-807e-034d452122f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n4w6w\" (UID: \"4fc4ca1b-b60d-484e-807e-034d452122f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594335 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b818ac68-9e90-4212-b658-8a946fff5cfc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594384 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/09e0ade2-2381-44e7-ad73-4bda0f48231c-machine-approver-tls\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594405 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48288264-5766-4d38-956b-68434cf4c955-serving-cert\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594428 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-etcd-client\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594449 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594456 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594469 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6kx\" (UniqueName: \"kubernetes.io/projected/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-kube-api-access-vb6kx\") pod 
\"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594495 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594516 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-encryption-config\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594534 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmgsl\" (UniqueName: \"kubernetes.io/projected/fe3eedb0-f613-4104-ba42-a22301757402-kube-api-access-gmgsl\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594558 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594577 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-serving-cert\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594602 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b818ac68-9e90-4212-b658-8a946fff5cfc-images\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594648 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-config\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594671 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594691 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399808fe-587e-43a8-8d03-5e8cde47c717-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 
08:25:21.594710 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee5c594f-6372-4e85-91f1-363525dd5abe-profile-collector-cert\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594736 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f77b180-f28c-472b-a577-44ef5012100c-images\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594761 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpwr5\" (UniqueName: \"kubernetes.io/projected/4c2d8724-2ac5-4542-878b-e2c9e33e8718-kube-api-access-vpwr5\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594791 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2d8724-2ac5-4542-878b-e2c9e33e8718-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594810 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-service-ca\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594864 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b818ac68-9e90-4212-b658-8a946fff5cfc-proxy-tls\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594893 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-config\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594911 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea4eb1b-17a9-468f-981b-b26d90c75221-audit-dir\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594930 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/901e3706-7b64-427f-a052-b4fc84b8304e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594952 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-audit-policies\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594974 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-config\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594996 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2014ff72-c050-49a3-9186-49e1830a27be-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595022 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-etcd-serving-ca\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595042 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmkl\" (UniqueName: \"kubernetes.io/projected/2014ff72-c050-49a3-9186-49e1830a27be-kube-api-access-wsmkl\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595062 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e0ade2-2381-44e7-ad73-4bda0f48231c-config\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595082 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe3eedb0-f613-4104-ba42-a22301757402-audit-dir\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595117 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e3706-7b64-427f-a052-b4fc84b8304e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595138 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595268 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.594428 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48288264-5766-4d38-956b-68434cf4c955-config\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.595616 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.596428 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.596758 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/901e3706-7b64-427f-a052-b4fc84b8304e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.597226 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09e0ade2-2381-44e7-ad73-4bda0f48231c-config\") pod \"machine-approver-56656f9798-v8zp9\" (UID: 
\"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.597267 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe3eedb0-f613-4104-ba42-a22301757402-audit-dir\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.597360 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-etcd-serving-ca\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.597742 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f77b180-f28c-472b-a577-44ef5012100c-images\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.597968 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-config\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.598116 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-audit-policies\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: 
\"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599092 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-config\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599153 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea4eb1b-17a9-468f-981b-b26d90c75221-audit-dir\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599252 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/399808fe-587e-43a8-8d03-5e8cde47c717-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599305 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-client\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599345 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-image-import-ca\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599380 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.600366 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c8x8k"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.600413 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.600468 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4zftz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.600600 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4fc4ca1b-b60d-484e-807e-034d452122f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-n4w6w\" (UID: \"4fc4ca1b-b60d-484e-807e-034d452122f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.600695 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/901e3706-7b64-427f-a052-b4fc84b8304e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.599253 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.600947 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-audit\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.601058 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09e0ade2-2381-44e7-ad73-4bda0f48231c-auth-proxy-config\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.601086 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48288264-5766-4d38-956b-68434cf4c955-trusted-ca\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.601106 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xx49\" (UniqueName: 
\"kubernetes.io/projected/c781345d-eebb-4b54-98f5-2a51cdb942a0-kube-api-access-2xx49\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.601118 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-serving-cert\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.601722 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.601823 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09e0ade2-2381-44e7-ad73-4bda0f48231c-auth-proxy-config\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602216 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306b229e-5b0e-4c77-83ce-f95f1176dc2b-serving-cert\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602336 4992 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/09e0ade2-2381-44e7-ad73-4bda0f48231c-machine-approver-tls\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602415 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602523 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-config\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602571 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcpp4\" (UniqueName: \"kubernetes.io/projected/aa46ae91-4a07-4663-82c7-ba8c6fac622e-kube-api-access-lcpp4\") pod \"dns-operator-744455d44c-dsnmh\" (UID: \"aa46ae91-4a07-4663-82c7-ba8c6fac622e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602608 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-config\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc 
kubenswrapper[4992]: I1211 08:25:21.602780 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48288264-5766-4d38-956b-68434cf4c955-trusted-ca\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602876 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c6dbb7-21d9-44dd-bf69-752289a02ca4-serving-cert\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.602909 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-audit-policies\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603035 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eea4eb1b-17a9-468f-981b-b26d90c75221-node-pullsecrets\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603060 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " 
pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603090 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603119 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-ca\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603176 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/eea4eb1b-17a9-468f-981b-b26d90c75221-node-pullsecrets\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603223 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwkg\" (UniqueName: \"kubernetes.io/projected/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-kube-api-access-jmwkg\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603263 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa46ae91-4a07-4663-82c7-ba8c6fac622e-metrics-tls\") pod 
\"dns-operator-744455d44c-dsnmh\" (UID: \"aa46ae91-4a07-4663-82c7-ba8c6fac622e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603292 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603357 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-serving-cert\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603382 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdpwq\" (UniqueName: \"kubernetes.io/projected/83b64ec0-5648-49b2-9e7e-32834c30e7a9-kube-api-access-tdpwq\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603403 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-service-ca\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603426 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-serving-cert\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603463 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603495 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-config\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.603958 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604102 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-image-import-ca\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604186 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2014ff72-c050-49a3-9186-49e1830a27be-srv-cert\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604257 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b64ec0-5648-49b2-9e7e-32834c30e7a9-serving-cert\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604292 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5t9g\" (UniqueName: \"kubernetes.io/projected/09e0ade2-2381-44e7-ad73-4bda0f48231c-kube-api-access-m5t9g\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604370 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604383 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f77b180-f28c-472b-a577-44ef5012100c-config\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 
08:25:21.604420 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604450 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-config\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604473 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmzk\" (UniqueName: \"kubernetes.io/projected/4fc4ca1b-b60d-484e-807e-034d452122f2-kube-api-access-rgmzk\") pod \"cluster-samples-operator-665b6dd947-n4w6w\" (UID: \"4fc4ca1b-b60d-484e-807e-034d452122f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604495 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-etcd-client\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604517 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-client-ca\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604541 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604676 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-trusted-ca-bundle\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604702 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-oauth-serving-cert\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604732 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-client-ca\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604757 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkz5q\" (UniqueName: \"kubernetes.io/projected/901e3706-7b64-427f-a052-b4fc84b8304e-kube-api-access-lkz5q\") pod 
\"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604780 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/06c6dbb7-21d9-44dd-bf69-752289a02ca4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604799 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-encryption-config\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.604819 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.605051 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-service-ca\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.605483 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-client-ca\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.605792 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-audit-policies\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.606015 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-serving-cert\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.606484 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/06c6dbb7-21d9-44dd-bf69-752289a02ca4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.606655 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gh4j\" (UniqueName: \"kubernetes.io/projected/8d4cb9cf-5e0f-46a9-adce-c54aaef43120-kube-api-access-8gh4j\") pod \"multus-admission-controller-857f4d67dd-c8tgl\" (UID: \"8d4cb9cf-5e0f-46a9-adce-c54aaef43120\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 
08:25:21.606669 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-config\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.606675 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607001 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-encryption-config\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607252 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mw5hz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607351 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607387 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607424 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h27n\" (UniqueName: \"kubernetes.io/projected/06c6dbb7-21d9-44dd-bf69-752289a02ca4-kube-api-access-2h27n\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607429 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f77b180-f28c-472b-a577-44ef5012100c-config\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607456 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5bba427-d8bb-4c0f-a609-4c9c556056e0-serving-cert\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607482 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-oauth-config\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607581 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48288264-5766-4d38-956b-68434cf4c955-serving-cert\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.607757 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608359 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608401 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pnfj\" (UniqueName: \"kubernetes.io/projected/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-kube-api-access-5pnfj\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608558 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f77b180-f28c-472b-a577-44ef5012100c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608594 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608622 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2d8724-2ac5-4542-878b-e2c9e33e8718-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608706 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d4cb9cf-5e0f-46a9-adce-c54aaef43120-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8tgl\" (UID: \"8d4cb9cf-5e0f-46a9-adce-c54aaef43120\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608727 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/448a032a-3b5d-494b-b7e4-55298af61b9e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608750 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-metrics-tls\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: 
\"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608783 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-config\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608805 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkkwt\" (UniqueName: \"kubernetes.io/projected/eea4eb1b-17a9-468f-981b-b26d90c75221-kube-api-access-bkkwt\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608799 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-oauth-serving-cert\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608827 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/399808fe-587e-43a8-8d03-5e8cde47c717-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608839 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-trusted-ca-bundle\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608847 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx2n\" (UniqueName: \"kubernetes.io/projected/7f77b180-f28c-472b-a577-44ef5012100c-kube-api-access-ssx2n\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608921 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6prp\" (UniqueName: \"kubernetes.io/projected/973372a1-5f38-40b5-8837-bd2236baf511-kube-api-access-d6prp\") pod \"downloads-7954f5f757-h6k7l\" (UID: \"973372a1-5f38-40b5-8837-bd2236baf511\") " pod="openshift-console/downloads-7954f5f757-h6k7l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.608953 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.609213 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06c6dbb7-21d9-44dd-bf69-752289a02ca4-serving-cert\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.609449 4992 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c2d8724-2ac5-4542-878b-e2c9e33e8718-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.609455 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8tgl"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.610074 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/eea4eb1b-17a9-468f-981b-b26d90c75221-audit\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.610164 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.610281 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-serving-cert\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.610495 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.610880 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.611218 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-serving-cert\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.611541 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5bba427-d8bb-4c0f-a609-4c9c556056e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.611651 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-encryption-config\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.611694 4992 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.611958 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aa46ae91-4a07-4663-82c7-ba8c6fac622e-metrics-tls\") pod \"dns-operator-744455d44c-dsnmh\" (UID: \"aa46ae91-4a07-4663-82c7-ba8c6fac622e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.612443 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-oauth-config\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.612561 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.612850 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c781345d-eebb-4b54-98f5-2a51cdb942a0-etcd-client\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.613231 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-client-ca\") pod \"controller-manager-879f6c89f-46cbx\" (UID: 
\"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.613306 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.613856 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-config\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.613920 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cptm9"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.614113 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.614344 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f77b180-f28c-472b-a577-44ef5012100c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 
08:25:21.614788 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/eea4eb1b-17a9-468f-981b-b26d90c75221-etcd-client\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.614907 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5bba427-d8bb-4c0f-a609-4c9c556056e0-serving-cert\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.615047 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.615100 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c781345d-eebb-4b54-98f5-2a51cdb942a0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.615879 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c2d8724-2ac5-4542-878b-e2c9e33e8718-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.617421 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr"] Dec 
11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.617791 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b64ec0-5648-49b2-9e7e-32834c30e7a9-serving-cert\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.617992 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.619749 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.621442 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.622699 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"] Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.625616 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.646294 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.666568 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.685937 4992 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.706149 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.709875 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-ca\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.709919 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwkg\" (UniqueName: \"kubernetes.io/projected/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-kube-api-access-jmwkg\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.709965 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2014ff72-c050-49a3-9186-49e1830a27be-srv-cert\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710032 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710055 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gh4j\" (UniqueName: \"kubernetes.io/projected/8d4cb9cf-5e0f-46a9-adce-c54aaef43120-kube-api-access-8gh4j\") pod \"multus-admission-controller-857f4d67dd-c8tgl\" (UID: \"8d4cb9cf-5e0f-46a9-adce-c54aaef43120\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710084 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pnfj\" (UniqueName: \"kubernetes.io/projected/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-kube-api-access-5pnfj\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710119 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d4cb9cf-5e0f-46a9-adce-c54aaef43120-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8tgl\" (UID: \"8d4cb9cf-5e0f-46a9-adce-c54aaef43120\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710145 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/448a032a-3b5d-494b-b7e4-55298af61b9e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710169 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-metrics-tls\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710205 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/399808fe-587e-43a8-8d03-5e8cde47c717-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710239 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ftl6\" (UniqueName: \"kubernetes.io/projected/ee5c594f-6372-4e85-91f1-363525dd5abe-kube-api-access-8ftl6\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710274 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw9rk\" (UniqueName: \"kubernetes.io/projected/448a032a-3b5d-494b-b7e4-55298af61b9e-kube-api-access-zw9rk\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710305 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/448a032a-3b5d-494b-b7e4-55298af61b9e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710328 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-trusted-ca\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710350 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69ld4\" (UniqueName: \"kubernetes.io/projected/b818ac68-9e90-4212-b658-8a946fff5cfc-kube-api-access-69ld4\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710370 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/448a032a-3b5d-494b-b7e4-55298af61b9e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710387 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee5c594f-6372-4e85-91f1-363525dd5abe-srv-cert\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710408 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b818ac68-9e90-4212-b658-8a946fff5cfc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710442 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b818ac68-9e90-4212-b658-8a946fff5cfc-images\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710459 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399808fe-587e-43a8-8d03-5e8cde47c717-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710474 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee5c594f-6372-4e85-91f1-363525dd5abe-profile-collector-cert\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710500 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-service-ca\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710515 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/b818ac68-9e90-4212-b658-8a946fff5cfc-proxy-tls\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710532 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2014ff72-c050-49a3-9186-49e1830a27be-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710546 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmkl\" (UniqueName: \"kubernetes.io/projected/2014ff72-c050-49a3-9186-49e1830a27be-kube-api-access-wsmkl\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710571 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-client\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.710589 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/399808fe-587e-43a8-8d03-5e8cde47c717-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc 
kubenswrapper[4992]: I1211 08:25:21.712327 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/448a032a-3b5d-494b-b7e4-55298af61b9e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.712990 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-ca\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.713551 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-service-ca\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.713849 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/399808fe-587e-43a8-8d03-5e8cde47c717-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.714397 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b818ac68-9e90-4212-b658-8a946fff5cfc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.715963 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/448a032a-3b5d-494b-b7e4-55298af61b9e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.717555 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-etcd-client\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.717686 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/399808fe-587e-43a8-8d03-5e8cde47c717-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.718004 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8d4cb9cf-5e0f-46a9-adce-c54aaef43120-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-c8tgl\" (UID: \"8d4cb9cf-5e0f-46a9-adce-c54aaef43120\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.726627 4992 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.745893 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.767116 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.787603 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.800040 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-metrics-tls\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.811523 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.835023 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.843029 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-trusted-ca\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.845724 4992 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.866908 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.887197 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.907761 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.931497 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.947083 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.966049 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.987076 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 08:25:21 crc kubenswrapper[4992]: I1211 08:25:21.998769 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b818ac68-9e90-4212-b658-8a946fff5cfc-proxy-tls\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.007243 4992 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.014011 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b818ac68-9e90-4212-b658-8a946fff5cfc-images\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.047718 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.066514 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.087137 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.107519 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.126625 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.147307 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.166536 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.186838 4992 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.207684 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.218093 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee5c594f-6372-4e85-91f1-363525dd5abe-profile-collector-cert\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.219010 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2014ff72-c050-49a3-9186-49e1830a27be-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.226739 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.238181 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2014ff72-c050-49a3-9186-49e1830a27be-srv-cert\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.246118 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.265797 4992 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.287787 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.306860 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.328785 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.337276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee5c594f-6372-4e85-91f1-363525dd5abe-srv-cert\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.347507 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.366809 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.387066 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.415520 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.427421 4992 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.445774 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.487152 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.507624 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.527023 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.546467 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.566073 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.585146 4992 request.go:700] Waited for 1.010452805s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0 Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.587775 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.607994 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.627686 4992 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.647335 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.673097 4992 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.686898 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.706605 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.726438 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.747236 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.767455 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.787542 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.837093 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4plsx\" (UniqueName: \"kubernetes.io/projected/e5bba427-d8bb-4c0f-a609-4c9c556056e0-kube-api-access-4plsx\") pod \"authentication-operator-69f744f599-zphhl\" (UID: \"e5bba427-d8bb-4c0f-a609-4c9c556056e0\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.860629 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chzsn\" (UniqueName: \"kubernetes.io/projected/306b229e-5b0e-4c77-83ce-f95f1176dc2b-kube-api-access-chzsn\") pod \"route-controller-manager-6576b87f9c-wjwg6\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.863103 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.875797 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj6hq\" (UniqueName: \"kubernetes.io/projected/48288264-5766-4d38-956b-68434cf4c955-kube-api-access-lj6hq\") pod \"console-operator-58897d9998-q5n2b\" (UID: \"48288264-5766-4d38-956b-68434cf4c955\") " pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.896492 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpwr5\" (UniqueName: \"kubernetes.io/projected/4c2d8724-2ac5-4542-878b-e2c9e33e8718-kube-api-access-vpwr5\") pod \"openshift-controller-manager-operator-756b6f6bc6-zmd99\" (UID: \"4c2d8724-2ac5-4542-878b-e2c9e33e8718\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.909887 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmgsl\" (UniqueName: \"kubernetes.io/projected/fe3eedb0-f613-4104-ba42-a22301757402-kube-api-access-gmgsl\") pod \"oauth-openshift-558db77b4-gwnxp\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.935014 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6kx\" (UniqueName: \"kubernetes.io/projected/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-kube-api-access-vb6kx\") pod \"console-f9d7485db-9lslc\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.943232 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xx49\" (UniqueName: \"kubernetes.io/projected/c781345d-eebb-4b54-98f5-2a51cdb942a0-kube-api-access-2xx49\") pod \"apiserver-7bbb656c7d-9xqnr\" (UID: \"c781345d-eebb-4b54-98f5-2a51cdb942a0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.949176 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.965253 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcpp4\" (UniqueName: \"kubernetes.io/projected/aa46ae91-4a07-4663-82c7-ba8c6fac622e-kube-api-access-lcpp4\") pod \"dns-operator-744455d44c-dsnmh\" (UID: \"aa46ae91-4a07-4663-82c7-ba8c6fac622e\") " pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.997406 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:25:22 crc kubenswrapper[4992]: I1211 08:25:22.997936 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdpwq\" (UniqueName: \"kubernetes.io/projected/83b64ec0-5648-49b2-9e7e-32834c30e7a9-kube-api-access-tdpwq\") pod \"controller-manager-879f6c89f-46cbx\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.009982 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5t9g\" (UniqueName: \"kubernetes.io/projected/09e0ade2-2381-44e7-ad73-4bda0f48231c-kube-api-access-m5t9g\") pod \"machine-approver-56656f9798-v8zp9\" (UID: \"09e0ade2-2381-44e7-ad73-4bda0f48231c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.015387 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.030111 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmzk\" (UniqueName: \"kubernetes.io/projected/4fc4ca1b-b60d-484e-807e-034d452122f2-kube-api-access-rgmzk\") pod \"cluster-samples-operator-665b6dd947-n4w6w\" (UID: \"4fc4ca1b-b60d-484e-807e-034d452122f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.034883 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.048100 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.054409 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkz5q\" (UniqueName: \"kubernetes.io/projected/901e3706-7b64-427f-a052-b4fc84b8304e-kube-api-access-lkz5q\") pod \"openshift-apiserver-operator-796bbdcf4f-g5jg9\" (UID: \"901e3706-7b64-427f-a052-b4fc84b8304e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.066912 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h27n\" (UniqueName: \"kubernetes.io/projected/06c6dbb7-21d9-44dd-bf69-752289a02ca4-kube-api-access-2h27n\") pod \"openshift-config-operator-7777fb866f-j4vw2\" (UID: \"06c6dbb7-21d9-44dd-bf69-752289a02ca4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.089817 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.090257 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.105916 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6prp\" (UniqueName: \"kubernetes.io/projected/973372a1-5f38-40b5-8837-bd2236baf511-kube-api-access-d6prp\") pod \"downloads-7954f5f757-h6k7l\" (UID: \"973372a1-5f38-40b5-8837-bd2236baf511\") " pod="openshift-console/downloads-7954f5f757-h6k7l" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.106958 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx2n\" (UniqueName: \"kubernetes.io/projected/7f77b180-f28c-472b-a577-44ef5012100c-kube-api-access-ssx2n\") pod \"machine-api-operator-5694c8668f-g5d6r\" (UID: \"7f77b180-f28c-472b-a577-44ef5012100c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.126952 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkkwt\" (UniqueName: \"kubernetes.io/projected/eea4eb1b-17a9-468f-981b-b26d90c75221-kube-api-access-bkkwt\") pod \"apiserver-76f77b778f-k7jwn\" (UID: \"eea4eb1b-17a9-468f-981b-b26d90c75221\") " pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.138400 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zphhl"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.148317 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69ld4\" (UniqueName: \"kubernetes.io/projected/b818ac68-9e90-4212-b658-8a946fff5cfc-kube-api-access-69ld4\") pod \"machine-config-operator-74547568cd-mnx8r\" (UID: \"b818ac68-9e90-4212-b658-8a946fff5cfc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 
08:25:23.153855 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.163231 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwkg\" (UniqueName: \"kubernetes.io/projected/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-kube-api-access-jmwkg\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.169475 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.174988 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.178973 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.180203 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.183788 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3a55f3a6-32d8-41da-b6c7-fa6fc282ae16-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zjl92\" (UID: \"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.211216 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gh4j\" (UniqueName: \"kubernetes.io/projected/8d4cb9cf-5e0f-46a9-adce-c54aaef43120-kube-api-access-8gh4j\") pod \"multus-admission-controller-857f4d67dd-c8tgl\" (UID: \"8d4cb9cf-5e0f-46a9-adce-c54aaef43120\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.212893 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.219795 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.225533 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pnfj\" (UniqueName: \"kubernetes.io/projected/007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32-kube-api-access-5pnfj\") pod \"etcd-operator-b45778765-9qd6n\" (UID: \"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.246809 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmkl\" (UniqueName: \"kubernetes.io/projected/2014ff72-c050-49a3-9186-49e1830a27be-kube-api-access-wsmkl\") pod \"olm-operator-6b444d44fb-cj6jr\" (UID: \"2014ff72-c050-49a3-9186-49e1830a27be\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.249545 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.263792 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/448a032a-3b5d-494b-b7e4-55298af61b9e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.271291 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.286767 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.296008 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/399808fe-587e-43a8-8d03-5e8cde47c717-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fp67l\" (UID: \"399808fe-587e-43a8-8d03-5e8cde47c717\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.309945 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ftl6\" (UniqueName: \"kubernetes.io/projected/ee5c594f-6372-4e85-91f1-363525dd5abe-kube-api-access-8ftl6\") pod \"catalog-operator-68c6474976-7mgxz\" (UID: \"ee5c594f-6372-4e85-91f1-363525dd5abe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.325249 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.326339 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw9rk\" (UniqueName: \"kubernetes.io/projected/448a032a-3b5d-494b-b7e4-55298af61b9e-kube-api-access-zw9rk\") pod \"cluster-image-registry-operator-dc59b4c8b-mjjwq\" (UID: \"448a032a-3b5d-494b-b7e4-55298af61b9e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.341145 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.341424 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.341503 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.341538 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.341574 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.342337 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:27:25.342286764 +0000 UTC m=+269.601760700 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.342488 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.346491 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.350021 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.353997 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.361217 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-h6k7l"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.366411 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.368938 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" event={"ID":"c781345d-eebb-4b54-98f5-2a51cdb942a0","Type":"ContainerStarted","Data":"db1dc9db128ef6b5eda9a5e89de5953bad042a1ae6dc5293daca58af0b95830e"}
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.372240 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.372591 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" event={"ID":"e5bba427-d8bb-4c0f-a609-4c9c556056e0","Type":"ContainerStarted","Data":"c447674bb7a9e99a7b1e0aff263225bc1847bcee3994d945f9c53fd9f69f476b"}
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.376858 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" event={"ID":"09e0ade2-2381-44e7-ad73-4bda0f48231c","Type":"ContainerStarted","Data":"5c69b308cadab259f0766341294adeca0dda10b4092d8eaa8562fdf5cb9d39e4"}
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.403761 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.404295 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.460916 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.461864 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-registration-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.462578 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4zb\" (UniqueName: \"kubernetes.io/projected/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-kube-api-access-rr4zb\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.462842 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-csi-data-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.462883 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5kc\" (UniqueName: \"kubernetes.io/projected/f495c66f-c76e-41c7-a70b-71c7a19c8c6a-kube-api-access-tw5kc\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jmlp\" (UID: \"f495c66f-c76e-41c7-a70b-71c7a19c8c6a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.462945 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-config\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463233 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463262 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvmrw\" (UniqueName: \"kubernetes.io/projected/bb32aae5-cae9-4de0-9bd8-d86ba8192322-kube-api-access-rvmrw\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463317 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ba830cd-271c-4672-91e5-37a40ce9b87e-metrics-tls\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463334 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkl7b\" (UniqueName: \"kubernetes.io/projected/7ba830cd-271c-4672-91e5-37a40ce9b87e-kube-api-access-bkl7b\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463403 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-metrics-certs\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463420 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-bound-sa-token\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463446 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72d71b86-f77b-4c9b-8f3d-3f6938628c28-cert\") pod \"ingress-canary-c8x8k\" (UID: \"72d71b86-f77b-4c9b-8f3d-3f6938628c28\") " pod="openshift-ingress-canary/ingress-canary-c8x8k"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463481 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2zd\" (UniqueName: \"kubernetes.io/projected/756b9d0e-106d-477a-b3a3-5c74ee4b5e54-kube-api-access-5p2zd\") pod \"migrator-59844c95c7-dgvjf\" (UID: \"756b9d0e-106d-477a-b3a3-5c74ee4b5e54\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463530 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f495c66f-c76e-41c7-a70b-71c7a19c8c6a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jmlp\" (UID: \"f495c66f-c76e-41c7-a70b-71c7a19c8c6a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463548 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhn4g\" (UniqueName: \"kubernetes.io/projected/72d71b86-f77b-4c9b-8f3d-3f6938628c28-kube-api-access-xhn4g\") pod \"ingress-canary-c8x8k\" (UID: \"72d71b86-f77b-4c9b-8f3d-3f6938628c28\") " pod="openshift-ingress-canary/ingress-canary-c8x8k"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463564 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1b199db1-eba8-45d0-961a-4183b059941d-certs\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463596 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7656745a-706d-4652-9db6-e94237d4999c-config-volume\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463624 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-trusted-ca\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463654 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-plugins-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463671 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463714 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vgk\" (UniqueName: \"kubernetes.io/projected/7656745a-706d-4652-9db6-e94237d4999c-kube-api-access-m5vgk\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463756 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463774 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d293af14-4779-4bfc-a8ed-6cfed6974f57-proxy-tls\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463806 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463823 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f8920d-7c08-4dbe-a3ca-b716ac949eda-service-ca-bundle\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463884 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbfd\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-kube-api-access-fjbfd\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463904 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e73e5676-61a1-4154-8943-fd3181ec993c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7qlnk\" (UID: \"e73e5676-61a1-4154-8943-fd3181ec993c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463930 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f7e2c9-4095-4001-8011-294c42e8d198-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463947 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21edf877-bf36-4cb4-8fcd-43751d4c4a04-apiservice-cert\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.463993 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5fk\" (UniqueName: \"kubernetes.io/projected/d293af14-4779-4bfc-a8ed-6cfed6974f57-kube-api-access-6s5fk\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464035 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-mountpoint-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464054 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22w2\" (UniqueName: \"kubernetes.io/projected/ecb1dd13-cc68-4571-96ba-b0855381def6-kube-api-access-n22w2\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464098 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-tls\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464115 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkl42\" (UniqueName: \"kubernetes.io/projected/e5f7e2c9-4095-4001-8011-294c42e8d198-kube-api-access-hkl42\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464132 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bb32aae5-cae9-4de0-9bd8-d86ba8192322-signing-key\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464156 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfrs\" (UniqueName: \"kubernetes.io/projected/e73e5676-61a1-4154-8943-fd3181ec993c-kube-api-access-5hfrs\") pod \"package-server-manager-789f6589d5-7qlnk\" (UID: \"e73e5676-61a1-4154-8943-fd3181ec993c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464180 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464197 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bb32aae5-cae9-4de0-9bd8-d86ba8192322-signing-cabundle\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464232 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f7e2c9-4095-4001-8011-294c42e8d198-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464260 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21edf877-bf36-4cb4-8fcd-43751d4c4a04-webhook-cert\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464279 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tz2h\" (UniqueName: \"kubernetes.io/projected/07389d03-2315-4483-b6bc-c25d2fb69f53-kube-api-access-9tz2h\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464323 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d293af14-4779-4bfc-a8ed-6cfed6974f57-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464341 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464363 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1dd13-cc68-4571-96ba-b0855381def6-config\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464646 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknqg\" (UniqueName: \"kubernetes.io/projected/21edf877-bf36-4cb4-8fcd-43751d4c4a04-kube-api-access-pknqg\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464692 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-config\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464709 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-certificates\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464754 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8znr\" (UniqueName: \"kubernetes.io/projected/56f8920d-7c08-4dbe-a3ca-b716ac949eda-kube-api-access-z8znr\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464799 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21edf877-bf36-4cb4-8fcd-43751d4c4a04-tmpfs\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464846 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-stats-auth\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464882 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ba830cd-271c-4672-91e5-37a40ce9b87e-config-volume\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464959 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.464983 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-socket-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465000 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1dd13-cc68-4571-96ba-b0855381def6-serving-cert\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465039 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465056 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-default-certificate\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465072 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b199db1-eba8-45d0-961a-4183b059941d-node-bootstrap-token\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465106 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465128 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7656745a-706d-4652-9db6-e94237d4999c-secret-volume\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.465176 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhmrp\" (UniqueName: \"kubernetes.io/projected/1b199db1-eba8-45d0-961a-4183b059941d-kube-api-access-bhmrp\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb"
Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.466930 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:23.966911674 +0000 UTC m=+148.226385600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.536727 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.536908 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.548590 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.572949 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573152 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573181 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-socket-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573213 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1dd13-cc68-4571-96ba-b0855381def6-serving-cert\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573235 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573255 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-default-certificate\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b"
Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573277 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b199db1-eba8-45d0-961a-4183b059941d-node-bootstrap-token\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " 
pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573297 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573317 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7656745a-706d-4652-9db6-e94237d4999c-secret-volume\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573336 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhmrp\" (UniqueName: \"kubernetes.io/projected/1b199db1-eba8-45d0-961a-4183b059941d-kube-api-access-bhmrp\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573358 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-registration-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573378 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4zb\" (UniqueName: \"kubernetes.io/projected/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-kube-api-access-rr4zb\") pod 
\"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573401 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-csi-data-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573420 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5kc\" (UniqueName: \"kubernetes.io/projected/f495c66f-c76e-41c7-a70b-71c7a19c8c6a-kube-api-access-tw5kc\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jmlp\" (UID: \"f495c66f-c76e-41c7-a70b-71c7a19c8c6a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573441 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-config\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573474 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvmrw\" (UniqueName: \"kubernetes.io/projected/bb32aae5-cae9-4de0-9bd8-d86ba8192322-kube-api-access-rvmrw\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573495 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/7ba830cd-271c-4672-91e5-37a40ce9b87e-metrics-tls\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573514 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkl7b\" (UniqueName: \"kubernetes.io/projected/7ba830cd-271c-4672-91e5-37a40ce9b87e-kube-api-access-bkl7b\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573535 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-metrics-certs\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573555 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-bound-sa-token\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573570 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72d71b86-f77b-4c9b-8f3d-3f6938628c28-cert\") pod \"ingress-canary-c8x8k\" (UID: \"72d71b86-f77b-4c9b-8f3d-3f6938628c28\") " pod="openshift-ingress-canary/ingress-canary-c8x8k" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573592 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p2zd\" (UniqueName: 
\"kubernetes.io/projected/756b9d0e-106d-477a-b3a3-5c74ee4b5e54-kube-api-access-5p2zd\") pod \"migrator-59844c95c7-dgvjf\" (UID: \"756b9d0e-106d-477a-b3a3-5c74ee4b5e54\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573616 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f495c66f-c76e-41c7-a70b-71c7a19c8c6a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jmlp\" (UID: \"f495c66f-c76e-41c7-a70b-71c7a19c8c6a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573652 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhn4g\" (UniqueName: \"kubernetes.io/projected/72d71b86-f77b-4c9b-8f3d-3f6938628c28-kube-api-access-xhn4g\") pod \"ingress-canary-c8x8k\" (UID: \"72d71b86-f77b-4c9b-8f3d-3f6938628c28\") " pod="openshift-ingress-canary/ingress-canary-c8x8k" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573669 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1b199db1-eba8-45d0-961a-4183b059941d-certs\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573699 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7656745a-706d-4652-9db6-e94237d4999c-config-volume\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 
08:25:23.573719 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-trusted-ca\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573735 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-plugins-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573755 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573777 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vgk\" (UniqueName: \"kubernetes.io/projected/7656745a-706d-4652-9db6-e94237d4999c-kube-api-access-m5vgk\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573798 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4zftz\" (UID: 
\"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573817 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d293af14-4779-4bfc-a8ed-6cfed6974f57-proxy-tls\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573837 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573857 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f8920d-7c08-4dbe-a3ca-b716ac949eda-service-ca-bundle\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573878 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjbfd\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-kube-api-access-fjbfd\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573904 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e73e5676-61a1-4154-8943-fd3181ec993c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7qlnk\" (UID: \"e73e5676-61a1-4154-8943-fd3181ec993c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573922 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f7e2c9-4095-4001-8011-294c42e8d198-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573943 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21edf877-bf36-4cb4-8fcd-43751d4c4a04-apiservice-cert\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573964 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s5fk\" (UniqueName: \"kubernetes.io/projected/d293af14-4779-4bfc-a8ed-6cfed6974f57-kube-api-access-6s5fk\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.573984 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-mountpoint-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: 
\"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574003 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n22w2\" (UniqueName: \"kubernetes.io/projected/ecb1dd13-cc68-4571-96ba-b0855381def6-kube-api-access-n22w2\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574023 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-tls\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574041 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkl42\" (UniqueName: \"kubernetes.io/projected/e5f7e2c9-4095-4001-8011-294c42e8d198-kube-api-access-hkl42\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574060 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bb32aae5-cae9-4de0-9bd8-d86ba8192322-signing-key\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574077 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfrs\" 
(UniqueName: \"kubernetes.io/projected/e73e5676-61a1-4154-8943-fd3181ec993c-kube-api-access-5hfrs\") pod \"package-server-manager-789f6589d5-7qlnk\" (UID: \"e73e5676-61a1-4154-8943-fd3181ec993c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574096 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574115 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bb32aae5-cae9-4de0-9bd8-d86ba8192322-signing-cabundle\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574147 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f7e2c9-4095-4001-8011-294c42e8d198-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574167 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21edf877-bf36-4cb4-8fcd-43751d4c4a04-webhook-cert\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 
11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574184 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tz2h\" (UniqueName: \"kubernetes.io/projected/07389d03-2315-4483-b6bc-c25d2fb69f53-kube-api-access-9tz2h\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574204 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d293af14-4779-4bfc-a8ed-6cfed6974f57-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574226 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574247 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1dd13-cc68-4571-96ba-b0855381def6-config\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574265 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pknqg\" (UniqueName: \"kubernetes.io/projected/21edf877-bf36-4cb4-8fcd-43751d4c4a04-kube-api-access-pknqg\") pod 
\"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574286 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-config\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574304 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-certificates\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574350 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8znr\" (UniqueName: \"kubernetes.io/projected/56f8920d-7c08-4dbe-a3ca-b716ac949eda-kube-api-access-z8znr\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574367 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21edf877-bf36-4cb4-8fcd-43751d4c4a04-tmpfs\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574386 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-stats-auth\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.574405 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ba830cd-271c-4672-91e5-37a40ce9b87e-config-volume\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.575733 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ba830cd-271c-4672-91e5-37a40ce9b87e-config-volume\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.575839 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.075822231 +0000 UTC m=+148.335296157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.577390 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.577675 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-socket-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.583818 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.584185 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-csi-data-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " 
pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.584619 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-plugins-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.585175 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-registration-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.585898 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-config\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.585894 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-mountpoint-dir\") pod \"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.586177 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f7e2c9-4095-4001-8011-294c42e8d198-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.586363 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f8920d-7c08-4dbe-a3ca-b716ac949eda-service-ca-bundle\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.587848 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-trusted-ca\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.587880 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-default-certificate\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.588259 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-certificates\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.589106 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecb1dd13-cc68-4571-96ba-b0855381def6-serving-cert\") pod 
\"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.594648 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/bb32aae5-cae9-4de0-9bd8-d86ba8192322-signing-key\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.594673 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b199db1-eba8-45d0-961a-4183b059941d-node-bootstrap-token\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.595270 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.595322 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21edf877-bf36-4cb4-8fcd-43751d4c4a04-tmpfs\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.595360 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/1b199db1-eba8-45d0-961a-4183b059941d-certs\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.595389 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d293af14-4779-4bfc-a8ed-6cfed6974f57-proxy-tls\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.597095 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d293af14-4779-4bfc-a8ed-6cfed6974f57-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.597221 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/bb32aae5-cae9-4de0-9bd8-d86ba8192322-signing-cabundle\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.598065 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7ba830cd-271c-4672-91e5-37a40ce9b87e-metrics-tls\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.598394 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.599917 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7656745a-706d-4652-9db6-e94237d4999c-config-volume\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.598763 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.601866 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21edf877-bf36-4cb4-8fcd-43751d4c4a04-webhook-cert\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.601894 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f7e2c9-4095-4001-8011-294c42e8d198-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.603293 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecb1dd13-cc68-4571-96ba-b0855381def6-config\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.604402 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f495c66f-c76e-41c7-a70b-71c7a19c8c6a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jmlp\" (UID: \"f495c66f-c76e-41c7-a70b-71c7a19c8c6a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.605041 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e73e5676-61a1-4154-8943-fd3181ec993c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-7qlnk\" (UID: \"e73e5676-61a1-4154-8943-fd3181ec993c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.606426 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/72d71b86-f77b-4c9b-8f3d-3f6938628c28-cert\") pod \"ingress-canary-c8x8k\" (UID: \"72d71b86-f77b-4c9b-8f3d-3f6938628c28\") " pod="openshift-ingress-canary/ingress-canary-c8x8k" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.606467 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7656745a-706d-4652-9db6-e94237d4999c-secret-volume\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.606622 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21edf877-bf36-4cb4-8fcd-43751d4c4a04-apiservice-cert\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.607325 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-stats-auth\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.608189 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/56f8920d-7c08-4dbe-a3ca-b716ac949eda-metrics-certs\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.608355 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-config\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.617131 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.617661 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.618097 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/742bc268-8247-4c7f-8cbd-7dcd2e6bd27a-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-78d4c\" (UID: \"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.622524 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhmrp\" (UniqueName: \"kubernetes.io/projected/1b199db1-eba8-45d0-961a-4183b059941d-kube-api-access-bhmrp\") pod \"machine-config-server-j7jhb\" (UID: \"1b199db1-eba8-45d0-961a-4183b059941d\") " pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.637946 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-tls\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.649008 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4zb\" (UniqueName: \"kubernetes.io/projected/f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf-kube-api-access-rr4zb\") pod 
\"csi-hostpathplugin-mw5hz\" (UID: \"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf\") " pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.672407 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f3cebfd-0a22-4492-8a8b-31bc3db6d184-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bbzdz\" (UID: \"4f3cebfd-0a22-4492-8a8b-31bc3db6d184\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.680780 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-j7jhb" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.681183 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.689340 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.189198558 +0000 UTC m=+148.448672484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.689914 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5kc\" (UniqueName: \"kubernetes.io/projected/f495c66f-c76e-41c7-a70b-71c7a19c8c6a-kube-api-access-tw5kc\") pod \"control-plane-machine-set-operator-78cbb6b69f-9jmlp\" (UID: \"f495c66f-c76e-41c7-a70b-71c7a19c8c6a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.699134 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-dsnmh"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.724692 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.730062 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vgk\" (UniqueName: \"kubernetes.io/projected/7656745a-706d-4652-9db6-e94237d4999c-kube-api-access-m5vgk\") pod \"collect-profiles-29424015-cwvkq\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.737068 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjbfd\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-kube-api-access-fjbfd\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.739713 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-q5n2b"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.739774 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.747906 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s5fk\" (UniqueName: \"kubernetes.io/projected/d293af14-4779-4bfc-a8ed-6cfed6974f57-kube-api-access-6s5fk\") pod \"machine-config-controller-84d6567774-6dwf8\" (UID: \"d293af14-4779-4bfc-a8ed-6cfed6974f57\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.781894 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.782081 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.282033842 +0000 UTC m=+148.541507768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.782299 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.782692 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.28267335 +0000 UTC m=+148.542147276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:23 crc kubenswrapper[4992]: W1211 08:25:23.792099 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa46ae91_4a07_4663_82c7_ba8c6fac622e.slice/crio-a620a485bcf0419ead05b7820c4514e852909fa0a14ab0235b9d8c1c3979ba9f WatchSource:0}: Error finding container a620a485bcf0419ead05b7820c4514e852909fa0a14ab0235b9d8c1c3979ba9f: Status 404 returned error can't find the container with id a620a485bcf0419ead05b7820c4514e852909fa0a14ab0235b9d8c1c3979ba9f Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.832650 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.885871 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.886608 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:24.386590393 +0000 UTC m=+148.646064319 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.887027 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.887672 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.932389 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99"] Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.933031 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.936412 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-bound-sa-token\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.936607 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n22w2\" (UniqueName: \"kubernetes.io/projected/ecb1dd13-cc68-4571-96ba-b0855381def6-kube-api-access-n22w2\") pod \"service-ca-operator-777779d784-ttpck\" (UID: \"ecb1dd13-cc68-4571-96ba-b0855381def6\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.943943 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvmrw\" (UniqueName: \"kubernetes.io/projected/bb32aae5-cae9-4de0-9bd8-d86ba8192322-kube-api-access-rvmrw\") pod \"service-ca-9c57cc56f-cptm9\" (UID: \"bb32aae5-cae9-4de0-9bd8-d86ba8192322\") " pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.946447 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkl42\" (UniqueName: \"kubernetes.io/projected/e5f7e2c9-4095-4001-8011-294c42e8d198-kube-api-access-hkl42\") pod \"kube-storage-version-migrator-operator-b67b599dd-7fdc9\" (UID: \"e5f7e2c9-4095-4001-8011-294c42e8d198\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.949938 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.987450 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:23 crc kubenswrapper[4992]: E1211 08:25:23.988242 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.488211374 +0000 UTC m=+148.747685300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:23 crc kubenswrapper[4992]: I1211 08:25:23.999787 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.000340 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p2zd\" (UniqueName: \"kubernetes.io/projected/756b9d0e-106d-477a-b3a3-5c74ee4b5e54-kube-api-access-5p2zd\") pod \"migrator-59844c95c7-dgvjf\" (UID: \"756b9d0e-106d-477a-b3a3-5c74ee4b5e54\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.025383 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkl7b\" (UniqueName: \"kubernetes.io/projected/7ba830cd-271c-4672-91e5-37a40ce9b87e-kube-api-access-bkl7b\") pod \"dns-default-wt2b6\" (UID: \"7ba830cd-271c-4672-91e5-37a40ce9b87e\") " pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.035143 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.076771 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfrs\" (UniqueName: \"kubernetes.io/projected/e73e5676-61a1-4154-8943-fd3181ec993c-kube-api-access-5hfrs\") pod \"package-server-manager-789f6589d5-7qlnk\" (UID: \"e73e5676-61a1-4154-8943-fd3181ec993c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.076930 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8znr\" (UniqueName: \"kubernetes.io/projected/56f8920d-7c08-4dbe-a3ca-b716ac949eda-kube-api-access-z8znr\") pod \"router-default-5444994796-mhq4b\" (UID: \"56f8920d-7c08-4dbe-a3ca-b716ac949eda\") " pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 
08:25:24.077958 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pknqg\" (UniqueName: \"kubernetes.io/projected/21edf877-bf36-4cb4-8fcd-43751d4c4a04-kube-api-access-pknqg\") pod \"packageserver-d55dfcdfc-2vrd2\" (UID: \"21edf877-bf36-4cb4-8fcd-43751d4c4a04\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.079056 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhn4g\" (UniqueName: \"kubernetes.io/projected/72d71b86-f77b-4c9b-8f3d-3f6938628c28-kube-api-access-xhn4g\") pod \"ingress-canary-c8x8k\" (UID: \"72d71b86-f77b-4c9b-8f3d-3f6938628c28\") " pod="openshift-ingress-canary/ingress-canary-c8x8k" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.081797 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.088415 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.089646 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.089991 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.589975349 +0000 UTC m=+148.849449275 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.118926 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tz2h\" (UniqueName: \"kubernetes.io/projected/07389d03-2315-4483-b6bc-c25d2fb69f53-kube-api-access-9tz2h\") pod \"marketplace-operator-79b997595-4zftz\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " pod="openshift-marketplace/marketplace-operator-79b997595-4zftz"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.162361 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.177957 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.202621 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c8x8k"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.204008 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.204342 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.704325922 +0000 UTC m=+148.963799848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.286804 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.287130 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.287190 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.305263 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.305541 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.805525081 +0000 UTC m=+149.064999007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.394824 4992 generic.go:334] "Generic (PLEG): container finished" podID="c781345d-eebb-4b54-98f5-2a51cdb942a0" containerID="24a5a755f749eb0bf824a5f41d9bd40698e6256a0a28053c37ea20c73c733cfc" exitCode=0
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.394921 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" event={"ID":"c781345d-eebb-4b54-98f5-2a51cdb942a0","Type":"ContainerDied","Data":"24a5a755f749eb0bf824a5f41d9bd40698e6256a0a28053c37ea20c73c733cfc"}
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.397399 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" event={"ID":"e5bba427-d8bb-4c0f-a609-4c9c556056e0","Type":"ContainerStarted","Data":"ac66342fa65041de39719310e7dc9d7c72bc873cc8ae328a2c63d0717babea21"}
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.452047 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" event={"ID":"09e0ade2-2381-44e7-ad73-4bda0f48231c","Type":"ContainerStarted","Data":"5d575eb4e5f37cb791920f829c55f7e5578826623cc25cec62e1b72a7f3c1ae6"}
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.454163 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.455878 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:24.955859782 +0000 UTC m=+149.215333708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.484617 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" event={"ID":"aa46ae91-4a07-4663-82c7-ba8c6fac622e","Type":"ContainerStarted","Data":"a620a485bcf0419ead05b7820c4514e852909fa0a14ab0235b9d8c1c3979ba9f"}
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.495586 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-j7jhb" event={"ID":"1b199db1-eba8-45d0-961a-4183b059941d","Type":"ContainerStarted","Data":"575f805ef438066701173e885060f71cd3f985486c52832b689865ab0d5dfd77"}
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.500576 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" event={"ID":"306b229e-5b0e-4c77-83ce-f95f1176dc2b","Type":"ContainerStarted","Data":"ab1e85b2dfbb0a45f510fb0bda506e998188e33b0811add921402c4c3e19ec7d"}
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.501772 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.504977 4992 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wjwg6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.505017 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" podUID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.555805 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.556816 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.056796465 +0000 UTC m=+149.316270391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.658404 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.658941 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.15892617 +0000 UTC m=+149.418400096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.766484 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.766970 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.266954143 +0000 UTC m=+149.526428069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:24 crc kubenswrapper[4992]: I1211 08:25:24.869291 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:24 crc kubenswrapper[4992]: E1211 08:25:24.869750 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.369726904 +0000 UTC m=+149.629200820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:24.980661 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:24.981295 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.481270813 +0000 UTC m=+149.740744739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.106883 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.107449 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.607423853 +0000 UTC m=+149.866897779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.227935 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.228518 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.728489356 +0000 UTC m=+149.987963282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.331511 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.331964 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.831950107 +0000 UTC m=+150.091424033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.451022 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.451554 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:25.951532541 +0000 UTC m=+150.211006467 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.581704 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.582082 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.082070509 +0000 UTC m=+150.341544435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.732201 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.732998 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.232968314 +0000 UTC m=+150.492442240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.736083 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" event={"ID":"09e0ade2-2381-44e7-ad73-4bda0f48231c","Type":"ContainerStarted","Data":"6d4e1145166d8370697662f77a8f9eb24be09537649b79e8149760d4fcb6db59"}
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.737114 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mhq4b" event={"ID":"56f8920d-7c08-4dbe-a3ca-b716ac949eda","Type":"ContainerStarted","Data":"e5a0cb4f2264865d0cef42bc34a321b6c257629ffaa9d411eefc5c7a39f00ac7"}
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.738370 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" event={"ID":"306b229e-5b0e-4c77-83ce-f95f1176dc2b","Type":"ContainerStarted","Data":"e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8"}
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.741240 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-q5n2b" event={"ID":"48288264-5766-4d38-956b-68434cf4c955","Type":"ContainerStarted","Data":"e7a110229b61eb543f6d7b447faa3bc66827027d244d1618e3b9c699eebc8962"}
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.742377 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" event={"ID":"4c2d8724-2ac5-4542-878b-e2c9e33e8718","Type":"ContainerStarted","Data":"c465a6ef26ae024cf3011ace0a4491092b58bea272b49c293359b90c78e85f74"}
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.840592 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.844712 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.344690007 +0000 UTC m=+150.604163943 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.870133 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" podStartSLOduration=130.87010481 podStartE2EDuration="2m10.87010481s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:25.839431645 +0000 UTC m=+150.098905581" watchObservedRunningTime="2025-12-11 08:25:25.87010481 +0000 UTC m=+150.129578736"
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.871513 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-v8zp9" podStartSLOduration=131.871505148 podStartE2EDuration="2m11.871505148s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:25.870788268 +0000 UTC m=+150.130262204" watchObservedRunningTime="2025-12-11 08:25:25.871505148 +0000 UTC m=+150.130979074"
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.918857 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zphhl" podStartSLOduration=130.918827869 podStartE2EDuration="2m10.918827869s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:25.916140287 +0000 UTC m=+150.175614223" watchObservedRunningTime="2025-12-11 08:25:25.918827869 +0000 UTC m=+150.178301795"
Dec 11 08:25:25 crc kubenswrapper[4992]: I1211 08:25:25.942588 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:25 crc kubenswrapper[4992]: E1211 08:25:25.943127 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.443102582 +0000 UTC m=+150.702576508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.044180 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.045069 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.545048372 +0000 UTC m=+150.804522298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.160048 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.160757 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.66071482 +0000 UTC m=+150.920188756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.261768 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.262824 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.762805293 +0000 UTC m=+151.022279219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.365154 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.365742 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.865627027 +0000 UTC m=+151.125100963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.444422 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.467402 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.468020 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:26.968005249 +0000 UTC m=+151.227479175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.571056 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.571504 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.071465709 +0000 UTC m=+151.330939635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.672911 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.673265 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.173253314 +0000 UTC m=+151.432727240 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.763754 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" event={"ID":"4c2d8724-2ac5-4542-878b-e2c9e33e8718","Type":"ContainerStarted","Data":"511668c73429194500923540d9f062da80ec7e1c17b72908789d3e75771950e3"}
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.775196 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" event={"ID":"4fc4ca1b-b60d-484e-807e-034d452122f2","Type":"ContainerStarted","Data":"219a4058802402900de43e848077d94bd8eb1abf0cdcbe1cd06815dee652825d"}
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.775253 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" event={"ID":"4fc4ca1b-b60d-484e-807e-034d452122f2","Type":"ContainerStarted","Data":"f14842fafdb583197e8d474cec3b258f0e43521f43bd8f211ea828cb3748b417"}
Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.787072 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.787540 4992 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.287520305 +0000 UTC m=+151.546994231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.787689 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.788772 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.288748589 +0000 UTC m=+151.548222505 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.792925 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" event={"ID":"aa46ae91-4a07-4663-82c7-ba8c6fac622e","Type":"ContainerStarted","Data":"4c79178a3b74002026817a2eca18834237c3ffbbc852886e1dc06dca7da58940"} Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.795349 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zmd99" podStartSLOduration=132.795317664 podStartE2EDuration="2m12.795317664s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:26.791757869 +0000 UTC m=+151.051231815" watchObservedRunningTime="2025-12-11 08:25:26.795317664 +0000 UTC m=+151.054791580" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.798848 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mhq4b" event={"ID":"56f8920d-7c08-4dbe-a3ca-b716ac949eda","Type":"ContainerStarted","Data":"43ae44411521b557d957749eae7247c02c35c0dfca29462ce574a44d689e7329"} Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.800594 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-j7jhb" 
event={"ID":"1b199db1-eba8-45d0-961a-4183b059941d","Type":"ContainerStarted","Data":"c4de8746b607a83aedfed2079230e62bb288b79b1d0364a5eb1c41c7f6bc66d8"} Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.826967 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-q5n2b" event={"ID":"48288264-5766-4d38-956b-68434cf4c955","Type":"ContainerStarted","Data":"a3a63a6830340c104db4fa11dbc433b207dc3074e3b447ec89436b2a88240cec"} Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.828362 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-q5n2b" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.842463 4992 patch_prober.go:28] interesting pod/console-operator-58897d9998-q5n2b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.842566 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-q5n2b" podUID="48288264-5766-4d38-956b-68434cf4c955" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.848879 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" event={"ID":"c781345d-eebb-4b54-98f5-2a51cdb942a0","Type":"ContainerStarted","Data":"9ecf224925e4e2c498d53d876cf168457aa88ac8a59c76b4fb712c1d35254ce1"} Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.868621 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-j7jhb" podStartSLOduration=6.868584284 
podStartE2EDuration="6.868584284s" podCreationTimestamp="2025-12-11 08:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:26.852750728 +0000 UTC m=+151.112224674" watchObservedRunningTime="2025-12-11 08:25:26.868584284 +0000 UTC m=+151.128058200" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.869126 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" podStartSLOduration=132.869117298 podStartE2EDuration="2m12.869117298s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:26.826467142 +0000 UTC m=+151.085941068" watchObservedRunningTime="2025-12-11 08:25:26.869117298 +0000 UTC m=+151.128591224" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.881355 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mhq4b" podStartSLOduration=131.881320496 podStartE2EDuration="2m11.881320496s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:26.875079718 +0000 UTC m=+151.134553654" watchObservedRunningTime="2025-12-11 08:25:26.881320496 +0000 UTC m=+151.140794422" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.889012 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 
08:25:26.889283 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.389227759 +0000 UTC m=+151.648701685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.889629 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:26 crc kubenswrapper[4992]: E1211 08:25:26.890270 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.390252356 +0000 UTC m=+151.649726282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.910237 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz"] Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.913712 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gwnxp"] Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.920525 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" podStartSLOduration=131.920497369 podStartE2EDuration="2m11.920497369s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:26.918553786 +0000 UTC m=+151.178027712" watchObservedRunningTime="2025-12-11 08:25:26.920497369 +0000 UTC m=+151.179971295" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.938274 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-c8tgl"] Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.942936 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lslc"] Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.951436 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-q5n2b" 
podStartSLOduration=132.95141633 podStartE2EDuration="2m12.95141633s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:26.950150116 +0000 UTC m=+151.209624042" watchObservedRunningTime="2025-12-11 08:25:26.95141633 +0000 UTC m=+151.210890256" Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.953375 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr"] Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.967565 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92"] Dec 11 08:25:26 crc kubenswrapper[4992]: I1211 08:25:26.982850 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.011900 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.013258 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.513227931 +0000 UTC m=+151.772701857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.015853 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-46cbx"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.024371 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.024451 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.058333 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-h6k7l"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.067332 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.091566 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.095259 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:27 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld 
Dec 11 08:25:27 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:27 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.095449 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.098614 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.112954 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.114011 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.167609 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.667584799 +0000 UTC m=+151.927058725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.175390 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.217754 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-g5d6r"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.239145 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.268754 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.269134 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.769104658 +0000 UTC m=+152.028578584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.269183 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k7jwn"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.279210 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cptm9"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.283002 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wt2b6"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.320888 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.349431 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.370068 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:27 crc kubenswrapper[4992]: W1211 08:25:27.370290 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ba830cd_271c_4672_91e5_37a40ce9b87e.slice/crio-cf561b9a36b9d64b3284351737f329a9e1998272dd410b253a34d91bec741399 WatchSource:0}: Error finding container cf561b9a36b9d64b3284351737f329a9e1998272dd410b253a34d91bec741399: Status 404 returned error can't find the container with id cf561b9a36b9d64b3284351737f329a9e1998272dd410b253a34d91bec741399 Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.370597 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.870577424 +0000 UTC m=+152.130051350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.385113 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.443939 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c8x8k"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.471414 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 
08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.471920 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.971886888 +0000 UTC m=+152.231360814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.472381 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.472797 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:27.972785301 +0000 UTC m=+152.232259287 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.562450 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9qd6n"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.574244 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.574300 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.074273759 +0000 UTC m=+152.333747685 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.574555 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.574914 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.074897506 +0000 UTC m=+152.334371432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.615424 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4zftz"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.642595 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.667253 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.677284 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.677542 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.177526534 +0000 UTC m=+152.437000460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.680209 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ttpck"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.737125 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mw5hz"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.741388 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz"] Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.778456 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.778812 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.278798556 +0000 UTC m=+152.538272482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.875781 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h6k7l" event={"ID":"973372a1-5f38-40b5-8837-bd2236baf511","Type":"ContainerStarted","Data":"cab51be83d5aeb2f4a01bd44900aa8b06b4c95e5af2784366d17eb8f25d49ac8"} Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.879412 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.879911 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.379890782 +0000 UTC m=+152.639364708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.882205 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" event={"ID":"ecb1dd13-cc68-4571-96ba-b0855381def6","Type":"ContainerStarted","Data":"862b6586bbeb8d79c35ecc15af313edbce18b8ce371da6e356e688e48d9f8e83"} Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.903976 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lslc" event={"ID":"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b","Type":"ContainerStarted","Data":"0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf"} Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.904021 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lslc" event={"ID":"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b","Type":"ContainerStarted","Data":"5ba224a85f804334e99daf5ac98f1da15a156793a10f3fa62dfd9bd5a6e37668"} Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.954929 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.964353 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.965917 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" event={"ID":"ee5c594f-6372-4e85-91f1-363525dd5abe","Type":"ContainerStarted","Data":"bbdb8f56f26365b6c88493c9e7c56afccf33d7a8c1c2849b55421a680eb25f41"} Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.967545 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.992820 4992 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7mgxz container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.993033 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" podUID="ee5c594f-6372-4e85-91f1-363525dd5abe" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Dec 11 08:25:27 crc kubenswrapper[4992]: I1211 08:25:27.996346 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:27 crc kubenswrapper[4992]: E1211 08:25:27.998049 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:28.498035327 +0000 UTC m=+152.757509253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.003270 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" event={"ID":"901e3706-7b64-427f-a052-b4fc84b8304e","Type":"ContainerStarted","Data":"cb225d11eba64645b1a7c3e949707c701cd383a5981ab29dd6fc77203d888b57"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.013098 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" event={"ID":"f495c66f-c76e-41c7-a70b-71c7a19c8c6a","Type":"ContainerStarted","Data":"b7058ed6cdd536c4ea769f0135f18cca6da9be031ce6a104850044dd469d80cc"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.035917 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" event={"ID":"399808fe-587e-43a8-8d03-5e8cde47c717","Type":"ContainerStarted","Data":"5d2605ceb9e4b09b026c391dc1c5263c69e3b50bd9612e00f8cf750494e4731f"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.051549 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9lslc" podStartSLOduration=134.051532876 podStartE2EDuration="2m14.051532876s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:27.961101025 +0000 UTC m=+152.220574961" watchObservedRunningTime="2025-12-11 08:25:28.051532876 +0000 UTC m=+152.311006802" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.052090 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" podStartSLOduration=133.05208463 podStartE2EDuration="2m13.05208463s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.04650302 +0000 UTC m=+152.305976946" watchObservedRunningTime="2025-12-11 08:25:28.05208463 +0000 UTC m=+152.311558556" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.052871 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" event={"ID":"eea4eb1b-17a9-468f-981b-b26d90c75221","Type":"ContainerStarted","Data":"07d51572432070c652dab6491bda0b362a2cfc38f7edeccd9a521e6f347c4438"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.065323 4992 generic.go:334] "Generic (PLEG): container finished" podID="06c6dbb7-21d9-44dd-bf69-752289a02ca4" containerID="7a60e29961189fbe7e68c56e276724b7973edce73605d6f613b4c7bf3d0cd0d4" exitCode=0 Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.065426 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" event={"ID":"06c6dbb7-21d9-44dd-bf69-752289a02ca4","Type":"ContainerDied","Data":"7a60e29961189fbe7e68c56e276724b7973edce73605d6f613b4c7bf3d0cd0d4"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.065464 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" 
event={"ID":"06c6dbb7-21d9-44dd-bf69-752289a02ca4","Type":"ContainerStarted","Data":"532999b93b25753b386a19bf86e78dedd107e42b66ebf6c68b318d58ea1f7638"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.070577 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"2f35c6b37ef7694e328f0d0f5f98f965447a9632691777d5497c645315ffd2d7"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.072940 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" podStartSLOduration=133.07291852 podStartE2EDuration="2m13.07291852s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.072545001 +0000 UTC m=+152.332018927" watchObservedRunningTime="2025-12-11 08:25:28.07291852 +0000 UTC m=+152.332392446" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.074960 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" event={"ID":"83b64ec0-5648-49b2-9e7e-32834c30e7a9","Type":"ContainerStarted","Data":"56f05104a51b5f0950772e17050568c10939da046adbf20df61a3e0413f0fe27"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.075232 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.082829 4992 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-46cbx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 11 
08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.082892 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" podUID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.098786 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.100824 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.600792869 +0000 UTC m=+152.860266795 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.131643 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:28 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:28 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:28 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.132216 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.174292 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" podStartSLOduration=134.174260073 podStartE2EDuration="2m14.174260073s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.174016727 +0000 UTC m=+152.433490673" watchObservedRunningTime="2025-12-11 08:25:28.174260073 +0000 UTC m=+152.433733999" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.182673 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.182712 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" event={"ID":"aa46ae91-4a07-4663-82c7-ba8c6fac622e","Type":"ContainerStarted","Data":"bdfa7ea349479a4b41d99ea0ce3973b1a00f928e452062ac055b1ca843c1f6cd"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.182733 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" event={"ID":"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16","Type":"ContainerStarted","Data":"b680c8f5ac74d1b47335219cb52cd80f33b6661435258fe4189180019d6dc0f8"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.211559 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.212076 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.712061459 +0000 UTC m=+152.971535385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.232077 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-dsnmh" podStartSLOduration=133.232058497 podStartE2EDuration="2m13.232058497s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.217947698 +0000 UTC m=+152.477421634" watchObservedRunningTime="2025-12-11 08:25:28.232058497 +0000 UTC m=+152.491532423" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.303591 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" event={"ID":"2014ff72-c050-49a3-9186-49e1830a27be","Type":"ContainerStarted","Data":"4dfa86056c595fd2d4e163cb4729cafd26a38a191f5872a7c00a67a242f34fc8"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.305686 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.313766 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 
08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.315287 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.815258993 +0000 UTC m=+153.074732929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.339026 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" event={"ID":"07389d03-2315-4483-b6bc-c25d2fb69f53","Type":"ContainerStarted","Data":"5a6f6aae5d4864bb54be8c5ede11455a0bd90f4de2f6e7fa05912d771a0d815b"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.339208 4992 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cj6jr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.339260 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" podUID="2014ff72-c050-49a3-9186-49e1830a27be" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.348197 4992 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c8x8k" event={"ID":"72d71b86-f77b-4c9b-8f3d-3f6938628c28","Type":"ContainerStarted","Data":"5418c045384b3be5b6591ca674bdbd95e08d3466a008943b15913d04a8a199be"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.357117 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" podStartSLOduration=133.357080137 podStartE2EDuration="2m13.357080137s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.347951601 +0000 UTC m=+152.607425547" watchObservedRunningTime="2025-12-11 08:25:28.357080137 +0000 UTC m=+152.616554063" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.357862 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" event={"ID":"7656745a-706d-4652-9db6-e94237d4999c","Type":"ContainerStarted","Data":"5d3bdf97a3e617496a1ac79fc0233eceb9dfe55636280ffec13c1dc71ac1ac4a"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.393739 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"88045638d5be22a6aac633ca6e52ce418da0d0dde15884a85683784086406cf9"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.399222 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" event={"ID":"756b9d0e-106d-477a-b3a3-5c74ee4b5e54","Type":"ContainerStarted","Data":"1f1546ba8cf19416b85970dfa025a52be946e16a97f8cdc60a32b70e91b96e12"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.401529 4992 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" event={"ID":"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32","Type":"ContainerStarted","Data":"3fb37d24cc8b4dc1f90a33ef7c19a6e88c4ebef33e78af618a7e0bbeed750a56"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.416388 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.416739 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" podStartSLOduration=134.416719259 podStartE2EDuration="2m14.416719259s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.415395034 +0000 UTC m=+152.674868970" watchObservedRunningTime="2025-12-11 08:25:28.416719259 +0000 UTC m=+152.676193185" Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.417138 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:28.917125511 +0000 UTC m=+153.176599437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.433621 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" event={"ID":"d293af14-4779-4bfc-a8ed-6cfed6974f57","Type":"ContainerStarted","Data":"810c0bf4eb95b02dc83794b60f5e2b69e3ee57d7333105cb18f6f60eea61e0d9"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.446515 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" event={"ID":"fe3eedb0-f613-4104-ba42-a22301757402","Type":"ContainerStarted","Data":"463ec31f2bfc1bd1e0addcd6054836790cdfba8b5b774a940ea267944e549af6"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.448353 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.468813 4992 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gwnxp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.468881 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" podUID="fe3eedb0-f613-4104-ba42-a22301757402" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.474499 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" event={"ID":"e5f7e2c9-4095-4001-8011-294c42e8d198","Type":"ContainerStarted","Data":"13053d9b964e9fe8108399a82c7e7ecc4dbed67ab87efe326549609f75555a3c"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.481922 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" podStartSLOduration=133.481899281 podStartE2EDuration="2m13.481899281s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.48146847 +0000 UTC m=+152.740942406" watchObservedRunningTime="2025-12-11 08:25:28.481899281 +0000 UTC m=+152.741373207" Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.490287 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" event={"ID":"e73e5676-61a1-4154-8943-fd3181ec993c","Type":"ContainerStarted","Data":"4f25ff865aab30cb8fcb4f3f13ecc66d3f081fba21489dd38429c5eec4649275"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.496829 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" event={"ID":"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a","Type":"ContainerStarted","Data":"755af9c08158241db783a419d43a276ce1cfe9a08e1dc81ce7c3e8e77f3254c1"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.497802 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"df31d1567091012025b6b1ffc4a3ef00b4898898e9fdad6d3c9d9e2fdf9e254c"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.500563 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt2b6" event={"ID":"7ba830cd-271c-4672-91e5-37a40ce9b87e","Type":"ContainerStarted","Data":"cf561b9a36b9d64b3284351737f329a9e1998272dd410b253a34d91bec741399"} Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.518694 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.518954 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.018910485 +0000 UTC m=+153.278384411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.519317 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.520111 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.020088577 +0000 UTC m=+153.279562503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.567122 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-n4w6w" event={"ID":"4fc4ca1b-b60d-484e-807e-034d452122f2","Type":"ContainerStarted","Data":"69ecddb6529ea732bb00740f9cf786179c2c8b8b7d43dc74c052498c225a7d97"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.595463 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" event={"ID":"448a032a-3b5d-494b-b7e4-55298af61b9e","Type":"ContainerStarted","Data":"e1384e16ca1af7fb5e853871e5baaa8d84f09522018beadf6b4b6b7235d7405c"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.615651 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" event={"ID":"7f77b180-f28c-472b-a577-44ef5012100c","Type":"ContainerStarted","Data":"db5fb696a5024cd4232ba7d93bae147b8a8554b9c0582d9a797eaa0fb6c699c3"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.626048 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.626417 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.126400404 +0000 UTC m=+153.385874330 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.628669 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" event={"ID":"8d4cb9cf-5e0f-46a9-adce-c54aaef43120","Type":"ContainerStarted","Data":"bf2d8c7394026af85ae918621c0f51da547770f63926cabcd19ca3872cbd5f48"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.636012 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" podStartSLOduration=133.635989752 podStartE2EDuration="2m13.635989752s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.627487084 +0000 UTC m=+152.886961000" watchObservedRunningTime="2025-12-11 08:25:28.635989752 +0000 UTC m=+152.895463678"
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.642358 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cptm9"
event={"ID":"bb32aae5-cae9-4de0-9bd8-d86ba8192322","Type":"ContainerStarted","Data":"def9e7461ca3833240fecd28b6a6ffb21ba5aba12ea224448ea1399cd76578ff"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.656874 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" event={"ID":"21edf877-bf36-4cb4-8fcd-43751d4c4a04","Type":"ContainerStarted","Data":"30dd7462b8d626b2e7b892aa5b7f3048ac57abff81f5fb65b4ea4a6a98073f04"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.663113 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" event={"ID":"4f3cebfd-0a22-4492-8a8b-31bc3db6d184","Type":"ContainerStarted","Data":"f9880a87dbe87b517b5a243994b1022451b9355be4e38d73e769a00fdb198b0d"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.683311 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" event={"ID":"b818ac68-9e90-4212-b658-8a946fff5cfc","Type":"ContainerStarted","Data":"2f515f5eb9c0b95454098563d75316635d9a61e258ca566fec166d125af8cb10"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.683364 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" event={"ID":"b818ac68-9e90-4212-b658-8a946fff5cfc","Type":"ContainerStarted","Data":"f3645b2403d11691474303042ad050b81c352484c5c3cbc6e3e7773ee25bc3fa"}
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.696477 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" podStartSLOduration=133.696451107 podStartE2EDuration="2m13.696451107s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:28.69617542 +0000 UTC m=+152.955649366" watchObservedRunningTime="2025-12-11 08:25:28.696451107 +0000 UTC m=+152.955925033"
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.718108 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9xqnr"
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.729276 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.733483 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.233469562 +0000 UTC m=+153.492943488 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.834108 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.834743 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.334720723 +0000 UTC m=+153.594194649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.936157 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:28 crc kubenswrapper[4992]: E1211 08:25:28.936820 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.436796006 +0000 UTC m=+153.696269932 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:28 crc kubenswrapper[4992]: I1211 08:25:28.972226 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-q5n2b"
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.040873 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.041291 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.541263215 +0000 UTC m=+153.800737151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.098075 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 08:25:29 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld
Dec 11 08:25:29 crc kubenswrapper[4992]: [+]process-running ok
Dec 11 08:25:29 crc kubenswrapper[4992]: healthz check failed
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.098151 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.143550 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.152007 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2025-12-11 08:25:29.65198034 +0000 UTC m=+153.911454266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.259905 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.260550 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.760521657 +0000 UTC m=+154.019995593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.362472 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.362848 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.862834876 +0000 UTC m=+154.122308802 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.467240 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.467605 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:29.967586882 +0000 UTC m=+154.227060808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.570039 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.571056 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.071036532 +0000 UTC m=+154.330510458 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.672207 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.672432 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.172377855 +0000 UTC m=+154.431851781 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.672603 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.673134 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.173117495 +0000 UTC m=+154.432591411 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.781565 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.781954 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.28193026 +0000 UTC m=+154.541404186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.805984 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c1b4f4a70d6a3badb41e379d4def2f6804071ae135026f557a8ac53026212729"}
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.839967 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cptm9" event={"ID":"bb32aae5-cae9-4de0-9bd8-d86ba8192322","Type":"ContainerStarted","Data":"679882f9a4c42f5f6e63da91f86b74d3951b5f8dcf4acefd1ed54592b17fbb91"}
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.841493 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g5jg9" event={"ID":"901e3706-7b64-427f-a052-b4fc84b8304e","Type":"ContainerStarted","Data":"f9a7ad52e65b1d2e75896a679aa7fb8263e5cdc730339fbf778e56863dc928c2"}
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.844022 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" event={"ID":"7f77b180-f28c-472b-a577-44ef5012100c","Type":"ContainerStarted","Data":"e6b18d13b1eb901a7b9bb9d6cb0007123246a2c00daa7c30df5186d19b602d3e"}
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.845106 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h6k7l"
event={"ID":"973372a1-5f38-40b5-8837-bd2236baf511","Type":"ContainerStarted","Data":"c5bab95ce628c4b39387d1f1ecc9a1b35ce9a795fbc606d9fe3e3226dd11d6dd"}
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.847254 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h6k7l"
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.848410 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body=
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.848466 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused"
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.879534 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" event={"ID":"b818ac68-9e90-4212-b658-8a946fff5cfc","Type":"ContainerStarted","Data":"1ea1905024d6140fd304bc66f25c9eea5c8fe8a5ca24373253ff0c4bff1b9981"}
Dec 11 08:25:29 crc kubenswrapper[4992]: E1211 08:25:29.886136 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.38611573 +0000 UTC m=+154.645589656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:29 crc kubenswrapper[4992]: I1211 08:25:29.910906 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.003010 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" event={"ID":"756b9d0e-106d-477a-b3a3-5c74ee4b5e54","Type":"ContainerStarted","Data":"e94a59837c8f609c77b8dec96cb5c82ff51589224c4c6b79ef9e9454fa3e433a"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.014583 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.015503 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed.
No retries permitted until 2025-12-11 08:25:30.515477076 +0000 UTC m=+154.774951002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.016156 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" event={"ID":"399808fe-587e-43a8-8d03-5e8cde47c717","Type":"ContainerStarted","Data":"6c0a67223cf88ec01a2442fd37a14dcfc0a7d8e4dcaaa649da8c9920ab2ae995"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.038508 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-h6k7l" podStartSLOduration=136.038490345 podStartE2EDuration="2m16.038490345s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:29.978305377 +0000 UTC m=+154.237779303" watchObservedRunningTime="2025-12-11 08:25:30.038490345 +0000 UTC m=+154.297964271"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.040800 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mnx8r" podStartSLOduration=135.040792756 podStartE2EDuration="2m15.040792756s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.037665632 +0000 UTC m=+154.297139558" watchObservedRunningTime="2025-12-11 08:25:30.040792756 +0000 UTC m=+154.300266682"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.064079 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" event={"ID":"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16","Type":"ContainerStarted","Data":"b9bd14540ea1941ee4f712ffbd2f5dd61d37c1db5b75feb7cc1ac0889d52b08e"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.106336 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 08:25:30 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld
Dec 11 08:25:30 crc kubenswrapper[4992]: [+]process-running ok
Dec 11 08:25:30 crc kubenswrapper[4992]: healthz check failed
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.106432 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.122226 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.138237 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" podStartSLOduration=136.138213415
podStartE2EDuration="2m16.138213415s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.091676814 +0000 UTC m=+154.351150740" watchObservedRunningTime="2025-12-11 08:25:30.138213415 +0000 UTC m=+154.397687341"
Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.152984 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.652938381 +0000 UTC m=+154.912412307 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.156832 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" event={"ID":"fe3eedb0-f613-4104-ba42-a22301757402","Type":"ContainerStarted","Data":"1dd7d7e2e11e1d6fd34b46af9990b5f248d02cb2192f3b80c16457cf54f33a9e"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.156890 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" event={"ID":"83b64ec0-5648-49b2-9e7e-32834c30e7a9","Type":"ContainerStarted","Data":"9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.191351 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" event={"ID":"8d4cb9cf-5e0f-46a9-adce-c54aaef43120","Type":"ContainerStarted","Data":"92233227dc42a2bb2577cd15ffc5db73f23f04d52a56fcbe1774dcfadcb36b6d"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.213126 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fp67l" podStartSLOduration=135.213110647 podStartE2EDuration="2m15.213110647s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.154243945 +0000 UTC m=+154.413717881" watchObservedRunningTime="2025-12-11 08:25:30.213110647 +0000 UTC m=+154.472584573"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.214144 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" podStartSLOduration=135.214140425 podStartE2EDuration="2m15.214140425s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.212200813 +0000 UTC m=+154.471674739" watchObservedRunningTime="2025-12-11 08:25:30.214140425 +0000 UTC m=+154.473614351"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.218139 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.242485 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt2b6" event={"ID":"7ba830cd-271c-4672-91e5-37a40ce9b87e","Type":"ContainerStarted","Data":"824fafb656163d6368f8e1448d8523bf4099443fd8d72d28502de6ec937cbac4"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.250218 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.251944 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.75192376 +0000 UTC m=+155.011397686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.271745 4992 generic.go:334] "Generic (PLEG): container finished" podID="eea4eb1b-17a9-468f-981b-b26d90c75221" containerID="aa61c65f569a56f84231ea7d30fc408f89bd907b0e80b3089ba5cf401854cbd8" exitCode=0
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.271930 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" event={"ID":"eea4eb1b-17a9-468f-981b-b26d90c75221","Type":"ContainerDied","Data":"aa61c65f569a56f84231ea7d30fc408f89bd907b0e80b3089ba5cf401854cbd8"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.298062 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz" event={"ID":"ee5c594f-6372-4e85-91f1-363525dd5abe","Type":"ContainerStarted","Data":"f0aa995f301c0edd4646bb92fc21507fb87bfcc63dd836226f3155def4028034"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.343934 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c8x8k" event={"ID":"72d71b86-f77b-4c9b-8f3d-3f6938628c28","Type":"ContainerStarted","Data":"1d8678510589f0f9ed7cae90be5b870cd09db6f5f956797248457e97a6f24f1e"}
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.372199 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7mgxz"
Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.405056 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.405521 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:30.905504968 +0000 UTC m=+155.164978894 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.451157 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" event={"ID":"7656745a-706d-4652-9db6-e94237d4999c","Type":"ContainerStarted","Data":"e87c6cb69fa890883f967f56ad79b6d456b8721a4dd9a7fb5b3ef71cf0a40010"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.516748 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" event={"ID":"742bc268-8247-4c7f-8cbd-7dcd2e6bd27a","Type":"ContainerStarted","Data":"bf9eae8dc0f568d90391b77e26dcd66e4f1f0d6a1aadde588eb53c2720670e98"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.517677 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.517970 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.017924719 +0000 UTC m=+155.277398645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.583736 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" event={"ID":"e5f7e2c9-4095-4001-8011-294c42e8d198","Type":"ContainerStarted","Data":"493e777c65dc9d07c4d1f505d843ffb731dad7ab19e5f02f9e013cf0d550b210"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.590968 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6b882cb9b740b82af9f64bcb19fd4334d1c9280b53df148a744a72269e66671e"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.591629 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.606143 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mjjwq" event={"ID":"448a032a-3b5d-494b-b7e4-55298af61b9e","Type":"ContainerStarted","Data":"7cefbe1d5d5cac20a83dc929e1f5b0f646c2487b873f0110cd931116384f2f52"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.622098 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.624277 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.124264087 +0000 UTC m=+155.383738013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.626482 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dkhhd"] Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.641733 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.648331 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.650722 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" event={"ID":"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf","Type":"ContainerStarted","Data":"c4f59b94393bfeea0fccc3b6e9b7abaff905abe87dbae122c8d1b2417decbc31"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.668984 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkhhd"] Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.678539 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a0adc8436f7fd986b75a907814e32b388505f0f9d143db9da1e07cdaa97da3f7"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.707446 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-c8x8k" podStartSLOduration=10.707429193 podStartE2EDuration="10.707429193s" podCreationTimestamp="2025-12-11 08:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.706760614 +0000 UTC m=+154.966234540" watchObservedRunningTime="2025-12-11 08:25:30.707429193 +0000 UTC m=+154.966903119" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.707997 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" 
event={"ID":"d293af14-4779-4bfc-a8ed-6cfed6974f57","Type":"ContainerStarted","Data":"3cdac6a8ac1077d0365b6627b1fce4e8cff4350e7888c6639ce5e3680caea5a0"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.725299 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.725697 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-catalog-content\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.725823 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6gqn\" (UniqueName: \"kubernetes.io/projected/4de0775a-dd54-436c-a5ff-fd6782a559a8-kube-api-access-h6gqn\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.725867 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-utilities\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.726929 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.226909016 +0000 UTC m=+155.486382942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.728519 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" event={"ID":"2014ff72-c050-49a3-9186-49e1830a27be","Type":"ContainerStarted","Data":"c73ecab421a09fc52fa84f76cdfd37ccfda91a80af3cf06352d72bc9e591abf7"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.761941 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7fdc9" podStartSLOduration=135.761920406 podStartE2EDuration="2m15.761920406s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.760987952 +0000 UTC m=+155.020461908" watchObservedRunningTime="2025-12-11 08:25:30.761920406 +0000 UTC m=+155.021394332" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.766650 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" 
event={"ID":"07389d03-2315-4483-b6bc-c25d2fb69f53","Type":"ContainerStarted","Data":"ebcfcf0491d4f1e3b26815bb950a9a6f7db4feedd645f1ebac40a484c7a007ba"} Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.766703 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.826979 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zbc6l"] Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.829512 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-utilities\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.829626 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-catalog-content\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.829745 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6gqn\" (UniqueName: \"kubernetes.io/projected/4de0775a-dd54-436c-a5ff-fd6782a559a8-kube-api-access-h6gqn\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.829774 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.830115 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.330101789 +0000 UTC m=+155.589575715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.831516 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-utilities\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.833003 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-catalog-content\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.836772 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.855257 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cj6jr" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.856258 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.891347 4992 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4zftz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.891727 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.892510 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6gqn\" (UniqueName: \"kubernetes.io/projected/4de0775a-dd54-436c-a5ff-fd6782a559a8-kube-api-access-h6gqn\") pod \"community-operators-dkhhd\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.935534 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.935864 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-utilities\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.936088 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvg6\" (UniqueName: \"kubernetes.io/projected/4a5362af-66f3-4482-8f2c-2f5748283eac-kube-api-access-5fvg6\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.936165 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-catalog-content\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:30 crc kubenswrapper[4992]: E1211 08:25:30.937825 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.437805733 +0000 UTC m=+155.697279649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.973104 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-78d4c" podStartSLOduration=135.973080941 podStartE2EDuration="2m15.973080941s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:30.913028887 +0000 UTC m=+155.172502813" watchObservedRunningTime="2025-12-11 08:25:30.973080941 +0000 UTC m=+155.232554867" Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.977740 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbc6l"] Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.983764 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vtvvj"] Dec 11 08:25:30 crc kubenswrapper[4992]: I1211 08:25:30.985587 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.059562 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-utilities\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.060294 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvg6\" (UniqueName: \"kubernetes.io/projected/4a5362af-66f3-4482-8f2c-2f5748283eac-kube-api-access-5fvg6\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.060341 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-catalog-content\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.060462 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.061300 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:31.561246691 +0000 UTC m=+155.820720617 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.061461 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-utilities\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.061784 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-catalog-content\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.100925 4992 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gwnxp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.101003 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" podUID="fe3eedb0-f613-4104-ba42-a22301757402" containerName="oauth-openshift" 
probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.111731 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:31 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:31 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:31 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.111827 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.132157 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtvvj"] Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.153874 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvg6\" (UniqueName: \"kubernetes.io/projected/4a5362af-66f3-4482-8f2c-2f5748283eac-kube-api-access-5fvg6\") pod \"certified-operators-zbc6l\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.162462 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:31 crc 
kubenswrapper[4992]: I1211 08:25:31.162795 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54zs5\" (UniqueName: \"kubernetes.io/projected/b2ecf8e0-1db5-44c8-84d2-321e753bf872-kube-api-access-54zs5\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.162837 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-utilities\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.162899 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-catalog-content\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.163064 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.663042257 +0000 UTC m=+155.922516183 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.186200 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.237145 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mfdbv"] Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.238426 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.268815 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.268930 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54zs5\" (UniqueName: \"kubernetes.io/projected/b2ecf8e0-1db5-44c8-84d2-321e753bf872-kube-api-access-54zs5\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.268962 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-utilities\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.269010 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-catalog-content\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.270328 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-catalog-content\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.277097 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.777062021 +0000 UTC m=+156.036535947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.270719 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-utilities\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.298465 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfdbv"] Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.308084 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.361967 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54zs5\" (UniqueName: \"kubernetes.io/projected/b2ecf8e0-1db5-44c8-84d2-321e753bf872-kube-api-access-54zs5\") pod \"community-operators-vtvvj\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.369738 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.369947 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-catalog-content\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.369982 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmqzj\" (UniqueName: \"kubernetes.io/projected/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-kube-api-access-rmqzj\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.370066 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-utilities\") pod 
\"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.380605 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.880555082 +0000 UTC m=+156.140029008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.477551 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.477644 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-utilities\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.477673 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-catalog-content\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.477692 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmqzj\" (UniqueName: \"kubernetes.io/projected/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-kube-api-access-rmqzj\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.478170 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:31.978158516 +0000 UTC m=+156.237632432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.478618 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-utilities\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.478880 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-catalog-content\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.553090 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmqzj\" (UniqueName: \"kubernetes.io/projected/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-kube-api-access-rmqzj\") pod \"certified-operators-mfdbv\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.557407 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" podStartSLOduration=136.557387035 podStartE2EDuration="2m16.557387035s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:31.555332279 +0000 UTC m=+155.814806225" watchObservedRunningTime="2025-12-11 08:25:31.557387035 +0000 UTC m=+155.816860961" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.587202 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.587602 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:32.087584976 +0000 UTC m=+156.347058902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.623158 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.666240 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" podStartSLOduration=136.666222709 podStartE2EDuration="2m16.666222709s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:31.665729916 +0000 UTC m=+155.925203842" watchObservedRunningTime="2025-12-11 08:25:31.666222709 +0000 UTC m=+155.925696635" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.689198 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.689528 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.189516226 +0000 UTC m=+156.448990152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.806790 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.807296 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.30727407 +0000 UTC m=+156.566747996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.807148 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:25:31 crc kubenswrapper[4992]: I1211 08:25:31.807621 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:31 crc kubenswrapper[4992]: E1211 08:25:31.808122 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.308105083 +0000 UTC m=+156.567579009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:31.981538 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:31.982202 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.482174421 +0000 UTC m=+156.741648347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.085688 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.086077 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.586065683 +0000 UTC m=+156.845539609 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.096374 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" event={"ID":"007ce4d1-0b66-45c6-a6f3-dca7cd2dbd32","Type":"ContainerStarted","Data":"bb2ceafb0d092ab7febbc706fa96e85a5d0451178352ae58c26766caeb85bb91"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.098302 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:32 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:32 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:32 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.098367 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.190477 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.191701 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-9qd6n" podStartSLOduration=137.191685022 podStartE2EDuration="2m17.191685022s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:32.189855942 +0000 UTC m=+156.449329878" watchObservedRunningTime="2025-12-11 08:25:32.191685022 +0000 UTC m=+156.451158938" Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.192315 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.692300328 +0000 UTC m=+156.951774254 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.203506 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.203889 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" event={"ID":"ecb1dd13-cc68-4571-96ba-b0855381def6","Type":"ContainerStarted","Data":"994947c0a526e54061c4e8ea905ed867e73262173f6f9f6fdd56c0598c77149c"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.285197 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ttpck" podStartSLOduration=137.285179424 podStartE2EDuration="2m17.285179424s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:32.280740924 +0000 UTC m=+156.540214850" watchObservedRunningTime="2025-12-11 08:25:32.285179424 +0000 UTC m=+156.544653350" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.297410 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.299665 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.799654253 +0000 UTC m=+157.059128179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.368082 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" event={"ID":"756b9d0e-106d-477a-b3a3-5c74ee4b5e54","Type":"ContainerStarted","Data":"d6b99b455d02ed728968e3cacfacbb33d85084433f78fd9c67fc8c3daa75dec4"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.369342 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.407807 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.408216 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:32.90820152 +0000 UTC m=+157.167675446 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.422748 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-dgvjf" podStartSLOduration=137.422729691 podStartE2EDuration="2m17.422729691s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:32.421370104 +0000 UTC m=+156.680844030" watchObservedRunningTime="2025-12-11 08:25:32.422729691 +0000 UTC m=+156.682203617" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.435757 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6dwf8" event={"ID":"d293af14-4779-4bfc-a8ed-6cfed6974f57","Type":"ContainerStarted","Data":"c92a2832ff03fa4f3204fbc07d1336e70dfb5b18c151b2a7c0393e9145c10c91"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.533192 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2s2wb"] Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.534456 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.535111 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.536620 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.036606201 +0000 UTC m=+157.296080127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.545700 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.564730 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" event={"ID":"4f3cebfd-0a22-4492-8a8b-31bc3db6d184","Type":"ContainerStarted","Data":"14babe541adcc0645465d361378cc0a97963afe64e9a18c439fd91cd4d702b46"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.621725 4992 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" event={"ID":"8d4cb9cf-5e0f-46a9-adce-c54aaef43120","Type":"ContainerStarted","Data":"5fa5e06c39b54e43814ffef80c89f35a8e297f3095974f10253703cfeb92cc3b"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.639282 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.639729 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.139712132 +0000 UTC m=+157.399186058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.671359 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" event={"ID":"06c6dbb7-21d9-44dd-bf69-752289a02ca4","Type":"ContainerStarted","Data":"48333398e44a00a02270790b5ede3e7ca7e2dc36e6d564db14589a5e5d66d5f7"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.742804 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-utilities\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.742854 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-catalog-content\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.742890 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspxx\" (UniqueName: \"kubernetes.io/projected/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-kube-api-access-wspxx\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " 
pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.742931 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.743209 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.243197764 +0000 UTC m=+157.502671690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.759445 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt2b6" event={"ID":"7ba830cd-271c-4672-91e5-37a40ce9b87e","Type":"ContainerStarted","Data":"8e460527ff8413c0efb4c72edca2c8b9452a032323812bc850ff2b4c969b8111"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.760576 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.827955 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" 
event={"ID":"f495c66f-c76e-41c7-a70b-71c7a19c8c6a","Type":"ContainerStarted","Data":"5cabcf2b59324ed0868ee176231e4ec47998ece33db6669c26ac8e83ee3adc29"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.881122 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.881466 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wspxx\" (UniqueName: \"kubernetes.io/projected/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-kube-api-access-wspxx\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.881707 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-utilities\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.881744 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-catalog-content\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.882277 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-catalog-content\") pod 
\"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: E1211 08:25:32.883719 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.383684099 +0000 UTC m=+157.643158025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.884539 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-utilities\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.942071 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wt2b6" podStartSLOduration=12.942036398 podStartE2EDuration="12.942036398s" podCreationTimestamp="2025-12-11 08:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:32.935129562 +0000 UTC m=+157.194603488" watchObservedRunningTime="2025-12-11 08:25:32.942036398 +0000 UTC m=+157.201510324" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.944346 4992 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bbzdz" podStartSLOduration=137.944335979 podStartE2EDuration="2m17.944335979s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:32.796130346 +0000 UTC m=+157.055604272" watchObservedRunningTime="2025-12-11 08:25:32.944335979 +0000 UTC m=+157.203809905" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.955097 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" event={"ID":"e73e5676-61a1-4154-8943-fd3181ec993c","Type":"ContainerStarted","Data":"f0e4fd3c553b6a9980dd26a66eef2a239466c096fdd1fdb38c9eee3b5a87fa23"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.955168 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" event={"ID":"e73e5676-61a1-4154-8943-fd3181ec993c","Type":"ContainerStarted","Data":"63c1fbad8b339ca82ec81f480902cd94d8f3a4aaaa7be5fc42b2eccdabd0fddd"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.956090 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.965932 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zjl92" event={"ID":"3a55f3a6-32d8-41da-b6c7-fa6fc282ae16","Type":"ContainerStarted","Data":"63ce43d6abf3d2fb4e486054c7d4412ff76b4f0ae6cfe706fc27b9d56c5038bf"} Dec 11 08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.977146 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2696"] Dec 11 
08:25:32 crc kubenswrapper[4992]: I1211 08:25:32.996366 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.000750 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xh2\" (UniqueName: \"kubernetes.io/projected/ce3e165d-68f9-42f9-bfca-d08aa820f146-kube-api-access-49xh2\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.000794 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.000952 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-catalog-content\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.001097 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-utilities\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.007913 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" event={"ID":"21edf877-bf36-4cb4-8fcd-43751d4c4a04","Type":"ContainerStarted","Data":"30964ce4e45805e8aaacb7420f22dde83eebe962155388cc32ce65487131a75b"} Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.022795 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.029940 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.529894088 +0000 UTC m=+157.789368014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.071824 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" event={"ID":"7f77b180-f28c-472b-a577-44ef5012100c","Type":"ContainerStarted","Data":"73ee6ce87191660ed6efade91710849a53d5b6b984a65a78decf31bf51134904"} Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.078233 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" event={"ID":"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf","Type":"ContainerStarted","Data":"3659d75ec18cfbe2e1d45ce9d18f97f5f90b9d1b8f3d4bcca5c6b8bb6bb6073d"} Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.097855 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wspxx\" (UniqueName: \"kubernetes.io/projected/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-kube-api-access-wspxx\") pod \"redhat-marketplace-2s2wb\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.111210 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.111567 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-catalog-content\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.111645 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-utilities\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.111703 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xh2\" (UniqueName: \"kubernetes.io/projected/ce3e165d-68f9-42f9-bfca-d08aa820f146-kube-api-access-49xh2\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.112313 4992 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.112367 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.112779 4992 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4zftz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.112800 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.117883 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2s2wb"] Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.141555 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.160785 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:33 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:33 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:33 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.160832 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.173971 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-catalog-content\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.197574 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.197913 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.209079 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2696"] Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.209854 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-c8tgl" podStartSLOduration=138.209836894 podStartE2EDuration="2m18.209836894s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:33.061659602 +0000 UTC 
m=+157.321133538" watchObservedRunningTime="2025-12-11 08:25:33.209836894 +0000 UTC m=+157.469310820" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.216561 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-utilities\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.253616 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.75358568 +0000 UTC m=+158.013059626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.253836 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.255239 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.755215393 +0000 UTC m=+158.014689309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.285851 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xh2\" (UniqueName: \"kubernetes.io/projected/ce3e165d-68f9-42f9-bfca-d08aa820f146-kube-api-access-49xh2\") pod \"redhat-marketplace-c2696\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.302009 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.375675 4992 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j4vw2 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.375772 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" podUID="06c6dbb7-21d9-44dd-bf69-752289a02ca4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.375910 4992 patch_prober.go:28] interesting pod/console-f9d7485db-9lslc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.375926 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lslc" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.376913 4992 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j4vw2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.377001 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" podUID="06c6dbb7-21d9-44dd-bf69-752289a02ca4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.377749 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.377785 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.379171 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.380313 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:33.880279645 +0000 UTC m=+158.139753741 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.393481 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.393826 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.508461 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.509796 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.009776175 +0000 UTC m=+158.269250101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.565958 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.625510 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.625937 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.125919426 +0000 UTC m=+158.385393352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.730247 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.730839 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.230820456 +0000 UTC m=+158.490294382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.831370 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.831885 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.331867941 +0000 UTC m=+158.591341867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:33 crc kubenswrapper[4992]: I1211 08:25:33.933859 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:33 crc kubenswrapper[4992]: E1211 08:25:33.934202 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.434189451 +0000 UTC m=+158.693663377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:33.992606 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" podStartSLOduration=138.992589901 podStartE2EDuration="2m18.992589901s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:33.447011298 +0000 UTC m=+157.706485224" watchObservedRunningTime="2025-12-11 08:25:33.992589901 +0000 UTC m=+158.252063827" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.014128 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9jmlp" podStartSLOduration=139.014109579 podStartE2EDuration="2m19.014109579s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:33.996743352 +0000 UTC m=+158.256217278" watchObservedRunningTime="2025-12-11 08:25:34.014109579 +0000 UTC m=+158.273583505" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.015392 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zbc6l"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.022836 4992 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2vrd2 container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.022892 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" podUID="21edf877-bf36-4cb4-8fcd-43751d4c4a04" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.038183 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.038767 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.538752021 +0000 UTC m=+158.798225947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.040859 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.041157 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.541145926 +0000 UTC m=+158.800619852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.090113 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccvgp"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.091786 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.091945 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.093012 4992 generic.go:334] "Generic (PLEG): container finished" podID="7656745a-706d-4652-9db6-e94237d4999c" containerID="e87c6cb69fa890883f967f56ad79b6d456b8721a4dd9a7fb5b3ef71cf0a40010" exitCode=0 Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.093062 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" event={"ID":"7656745a-706d-4652-9db6-e94237d4999c","Type":"ContainerDied","Data":"e87c6cb69fa890883f967f56ad79b6d456b8721a4dd9a7fb5b3ef71cf0a40010"} Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.094181 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dkhhd"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.104697 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.142733 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.143240 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:34.643201288 +0000 UTC m=+158.902675214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.143390 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-catalog-content\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.143449 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljmd\" (UniqueName: \"kubernetes.io/projected/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-kube-api-access-dljmd\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.143530 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.143709 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-utilities\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.145462 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.645452589 +0000 UTC m=+158.904926515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.166391 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" podStartSLOduration=139.166362501 podStartE2EDuration="2m19.166362501s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:34.163120194 +0000 UTC m=+158.422594120" watchObservedRunningTime="2025-12-11 08:25:34.166362501 +0000 UTC m=+158.425836427" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.239744 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccvgp"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.239782 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" event={"ID":"eea4eb1b-17a9-468f-981b-b26d90c75221","Type":"ContainerStarted","Data":"baa4886bacb828861c62cc9f828868707465192feb3a62114e4ade7872b6baa4"} Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.239815 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mfdbv"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.239830 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vtvvj"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.239842 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" event={"ID":"eea4eb1b-17a9-468f-981b-b26d90c75221","Type":"ContainerStarted","Data":"323ca7ca9ba4dbb110de33da7e99c075bbe086ef29b9e124efbeb7852dbe605f"} Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.244049 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:34 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:34 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:34 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.244111 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.255425 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.256204 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-catalog-content\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.256278 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljmd\" (UniqueName: \"kubernetes.io/projected/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-kube-api-access-dljmd\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.259307 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.759271757 +0000 UTC m=+159.018745683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.271937 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-utilities\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.282867 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-catalog-content\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.287121 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-utilities\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.291422 4992 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4zftz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.291466 
4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.291547 4992 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4zftz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.291574 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.377775 4992 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-j4vw2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.377904 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" podUID="06c6dbb7-21d9-44dd-bf69-752289a02ca4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:34 crc 
kubenswrapper[4992]: I1211 08:25:34.396936 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.397499 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:34.897477142 +0000 UTC m=+159.156951068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.517693 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.518082 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:35.018060302 +0000 UTC m=+159.277534228 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.626850 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.627372 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.12735548 +0000 UTC m=+159.386829406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.736273 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.736815 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.236792901 +0000 UTC m=+159.496266827 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.759519 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljmd\" (UniqueName: \"kubernetes.io/projected/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-kube-api-access-dljmd\") pod \"redhat-operators-ccvgp\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.826334 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkzhk"] Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.827439 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.837798 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.838274 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:35.338255178 +0000 UTC m=+159.597729104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.859195 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.948383 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.948546 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-utilities\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:34 crc kubenswrapper[4992]: I1211 08:25:34.948572 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm6tn\" (UniqueName: \"kubernetes.io/projected/e27af7be-51b7-40ad-a740-9f9cc14fa328-kube-api-access-pm6tn\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:34 crc 
kubenswrapper[4992]: I1211 08:25:34.948604 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-catalog-content\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:34 crc kubenswrapper[4992]: E1211 08:25:34.948850 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.448821609 +0000 UTC m=+159.708295525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.075700 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-utilities\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.075769 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm6tn\" (UniqueName: \"kubernetes.io/projected/e27af7be-51b7-40ad-a740-9f9cc14fa328-kube-api-access-pm6tn\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " 
pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.075806 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.075832 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-catalog-content\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.076567 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-catalog-content\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.079203 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-utilities\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.080099 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:35.580079237 +0000 UTC m=+159.839553163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.103938 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:35 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:35 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:35 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.104541 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.121893 4992 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2vrd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.121981 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" 
podUID="21edf877-bf36-4cb4-8fcd-43751d4c4a04" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.137915 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzhk"] Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.180606 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.181496 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.681470782 +0000 UTC m=+159.940944708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.183370 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm6tn\" (UniqueName: \"kubernetes.io/projected/e27af7be-51b7-40ad-a740-9f9cc14fa328-kube-api-access-pm6tn\") pod \"redhat-operators-lkzhk\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.183955 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-j4vw2" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.222050 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerStarted","Data":"64eb67eba2c331a87d1e0de21468251cb4cb47fd8ff99f1c4abba07bf651ae03"} Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.244054 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2s2wb"] Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.249220 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2696"] Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.265085 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfdbv" 
event={"ID":"fa659f5d-c90f-4aa0-aacb-79889eb26e8e","Type":"ContainerStarted","Data":"c3d3f23c3b1449db279d48a8e22ff9ff93eaa11c875f75b6e9e2e8daaf07bb30"} Dec 11 08:25:35 crc kubenswrapper[4992]: W1211 08:25:35.276820 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce3e165d_68f9_42f9_bfca_d08aa820f146.slice/crio-2c97c9554c836e41b04a651fe30685120f1ecc12ae29efba5d78f88f8fa405aa WatchSource:0}: Error finding container 2c97c9554c836e41b04a651fe30685120f1ecc12ae29efba5d78f88f8fa405aa: Status 404 returned error can't find the container with id 2c97c9554c836e41b04a651fe30685120f1ecc12ae29efba5d78f88f8fa405aa Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.283659 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.284228 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.784202352 +0000 UTC m=+160.043676278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.286806 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" podStartSLOduration=140.286779012 podStartE2EDuration="2m20.286779012s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:35.281265453 +0000 UTC m=+159.540739379" watchObservedRunningTime="2025-12-11 08:25:35.286779012 +0000 UTC m=+159.546252928" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.287723 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerStarted","Data":"9d66f06cd460c49b4e319e9589fb29da3fcdc30127908509c403187b681584dd"} Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.292622 4992 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2vrd2 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.292723 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" 
podUID="21edf877-bf36-4cb4-8fcd-43751d4c4a04" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.330362 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" event={"ID":"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf","Type":"ContainerStarted","Data":"bde6dddeaa783ad62fb7585f4bb95ac17dc0f38a629b4df3ee0b7a899cdb0cfb"} Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.355089 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerStarted","Data":"d94f7f1c5d133cc49e22a31d2540d43c6dfef053f5429f745536d1ba594f619b"} Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.386587 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.387872 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.887839887 +0000 UTC m=+160.147313813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.395327 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.395423 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.415817 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.488046 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.492029 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:35.992015738 +0000 UTC m=+160.251489664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.535521 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-g5d6r" podStartSLOduration=140.535498986 podStartE2EDuration="2m20.535498986s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:35.468088285 +0000 UTC m=+159.727562211" watchObservedRunningTime="2025-12-11 08:25:35.535498986 +0000 UTC m=+159.794972912" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 
08:25:35.589412 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.590007 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.08998521 +0000 UTC m=+160.349459136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.656491 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2vrd2" Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.700416 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.700925 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.200909861 +0000 UTC m=+160.460383787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.803264 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.803686 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.303671023 +0000 UTC m=+160.563144949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:35 crc kubenswrapper[4992]: I1211 08:25:35.918811 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:35 crc kubenswrapper[4992]: E1211 08:25:35.919399 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.419387523 +0000 UTC m=+160.678861449 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.021551 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.022027 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.52200352 +0000 UTC m=+160.781477446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.107696 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:36 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:36 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:36 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.107763 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.124057 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.124652 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 08:25:36.624622029 +0000 UTC m=+160.884095945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.230253 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.230760 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.73073657 +0000 UTC m=+160.990210496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.333660 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.334029 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.834017876 +0000 UTC m=+161.093491792 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.431873 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerStarted","Data":"9b85b7070868dfe8ad8755f1bb93e36aaa3490b51d63d3a1ce6a23c13f32bded"} Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.436301 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.436703 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:36.936686555 +0000 UTC m=+161.196160481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.463873 4992 generic.go:334] "Generic (PLEG): container finished" podID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerID="f9cef8b459a326ed06e002ee39169b8a0f1a29c6e1a9546e91c3502822fff0cb" exitCode=0 Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.464286 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerDied","Data":"f9cef8b459a326ed06e002ee39169b8a0f1a29c6e1a9546e91c3502822fff0cb"} Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.474121 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.484768 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerID="bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937" exitCode=0 Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.484891 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfdbv" event={"ID":"fa659f5d-c90f-4aa0-aacb-79889eb26e8e","Type":"ContainerDied","Data":"bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937"} Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.537860 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.538233 4992 generic.go:334] "Generic (PLEG): container finished" podID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerID="b7d811624a5a8e3d8c044bcd5c2a5a38e5480b14337179540e9de4de113c3525" exitCode=0 Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.538310 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerDied","Data":"b7d811624a5a8e3d8c044bcd5c2a5a38e5480b14337179540e9de4de113c3525"} Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.538335 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.038318566 +0000 UTC m=+161.297792492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.584023 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerStarted","Data":"2c97c9554c836e41b04a651fe30685120f1ecc12ae29efba5d78f88f8fa405aa"} Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.585804 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccvgp"] Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.611544 4992 generic.go:334] "Generic (PLEG): container finished" podID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerID="6eab330ad7b6f9cc0b95c8788d6bd1c8a5fecddd72b4a103a615df81b666c925" exitCode=0 Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.612407 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerDied","Data":"6eab330ad7b6f9cc0b95c8788d6bd1c8a5fecddd72b4a103a615df81b666c925"} Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.639020 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.639190 4992 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.139159476 +0000 UTC m=+161.398633402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.639478 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.640560 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.140551754 +0000 UTC m=+161.400025680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.692896 4992 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.740844 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.742661 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.242590366 +0000 UTC m=+161.502064292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.842491 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:36 crc kubenswrapper[4992]: E1211 08:25:36.842939 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.342923883 +0000 UTC m=+161.602397809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:36 crc kubenswrapper[4992]: W1211 08:25:36.927358 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a7d128f_32e9_47c5_bac4_6e94898ea0b7.slice/crio-08145902ca054443f7200a4fdac182e23ac0e254f24dc83d5b70e76188b60264 WatchSource:0}: Error finding container 08145902ca054443f7200a4fdac182e23ac0e254f24dc83d5b70e76188b60264: Status 404 returned error can't find the container with id 08145902ca054443f7200a4fdac182e23ac0e254f24dc83d5b70e76188b60264 Dec 11 08:25:36 crc kubenswrapper[4992]: I1211 08:25:36.982179 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:36.998520 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.482780041 +0000 UTC m=+161.742253967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.083457 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.083522 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.085219 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.585206644 +0000 UTC m=+161.844680570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.110799 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:37 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:37 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:37 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.111180 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.112443 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1b67a6a3-6d97-4b58-96d9-f0909df30802-metrics-certs\") pod \"network-metrics-daemon-j68fr\" (UID: \"1b67a6a3-6d97-4b58-96d9-f0909df30802\") " pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.125175 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-j68fr" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.184599 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.185013 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.684994216 +0000 UTC m=+161.944468142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.212861 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.231320 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzhk"] Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.291018 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5vgk\" (UniqueName: \"kubernetes.io/projected/7656745a-706d-4652-9db6-e94237d4999c-kube-api-access-m5vgk\") pod \"7656745a-706d-4652-9db6-e94237d4999c\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.291182 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7656745a-706d-4652-9db6-e94237d4999c-secret-volume\") pod \"7656745a-706d-4652-9db6-e94237d4999c\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.291224 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7656745a-706d-4652-9db6-e94237d4999c-config-volume\") pod \"7656745a-706d-4652-9db6-e94237d4999c\" (UID: \"7656745a-706d-4652-9db6-e94237d4999c\") " Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.291523 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.291915 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.791899299 +0000 UTC m=+162.051373435 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.292657 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7656745a-706d-4652-9db6-e94237d4999c-config-volume" (OuterVolumeSpecName: "config-volume") pod "7656745a-706d-4652-9db6-e94237d4999c" (UID: "7656745a-706d-4652-9db6-e94237d4999c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.306490 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7656745a-706d-4652-9db6-e94237d4999c-kube-api-access-m5vgk" (OuterVolumeSpecName: "kube-api-access-m5vgk") pod "7656745a-706d-4652-9db6-e94237d4999c" (UID: "7656745a-706d-4652-9db6-e94237d4999c"). InnerVolumeSpecName "kube-api-access-m5vgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.313918 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7656745a-706d-4652-9db6-e94237d4999c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7656745a-706d-4652-9db6-e94237d4999c" (UID: "7656745a-706d-4652-9db6-e94237d4999c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.393113 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.393852 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5vgk\" (UniqueName: \"kubernetes.io/projected/7656745a-706d-4652-9db6-e94237d4999c-kube-api-access-m5vgk\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.393872 4992 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7656745a-706d-4652-9db6-e94237d4999c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.393884 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7656745a-706d-4652-9db6-e94237d4999c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.393968 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.893948341 +0000 UTC m=+162.153422267 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.412736 4992 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T08:25:36.693406745Z","Handler":null,"Name":""} Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.494919 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.495314 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:37.995300556 +0000 UTC m=+162.254774482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.595412 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.595730 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:38.095622861 +0000 UTC m=+162.355096787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.595752 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.596115 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:38.096093293 +0000 UTC m=+162.355567219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.706982 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.707930 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 08:25:38.207901728 +0000 UTC m=+162.467375654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.717114 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" event={"ID":"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf","Type":"ContainerStarted","Data":"bb83d854a32f031d4be8f0794d4baa6b175f5bda55fef3a37684110211020e7f"} Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.717205 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" event={"ID":"f6a23b4c-1e9b-47f5-86a1-c9c1ab25e5bf","Type":"ContainerStarted","Data":"dcacccb82d0a9d8b94eeed1c9520a987d2cf2b1202867a86ec258b3767a58c3c"} Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.720865 4992 generic.go:334] "Generic (PLEG): container finished" podID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerID="b71bf3a90e32be049913e290acb60082759f3dc8a1fbb10651c6f42ce0acd8c2" exitCode=0 Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.720977 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerDied","Data":"b71bf3a90e32be049913e290acb60082759f3dc8a1fbb10651c6f42ce0acd8c2"} Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.721011 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerStarted","Data":"08145902ca054443f7200a4fdac182e23ac0e254f24dc83d5b70e76188b60264"} Dec 11 
08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.727418 4992 generic.go:334] "Generic (PLEG): container finished" podID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerID="94628c3f9c191e37ec65ae569da9e77cca8d705e9064355ad325dd682389c832" exitCode=0 Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.727512 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerDied","Data":"94628c3f9c191e37ec65ae569da9e77cca8d705e9064355ad325dd682389c832"} Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.758026 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mw5hz" podStartSLOduration=17.758005645 podStartE2EDuration="17.758005645s" podCreationTimestamp="2025-12-11 08:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:37.754766878 +0000 UTC m=+162.014240814" watchObservedRunningTime="2025-12-11 08:25:37.758005645 +0000 UTC m=+162.017479571" Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.795429 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerStarted","Data":"344ad35e191cc595eecded2b98448bc8a639f48302c956e9bbd02f9df7868249"} Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.815523 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:37 crc kubenswrapper[4992]: E1211 08:25:37.815949 4992 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 08:25:38.315932853 +0000 UTC m=+162.575406779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-j86m7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.817861 4992 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.817909 4992 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.916605 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 08:25:37 crc kubenswrapper[4992]: I1211 08:25:37.921949 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.009106 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" event={"ID":"7656745a-706d-4652-9db6-e94237d4999c","Type":"ContainerDied","Data":"5d3bdf97a3e617496a1ac79fc0233eceb9dfe55636280ffec13c1dc71ac1ac4a"} Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.009779 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d3bdf97a3e617496a1ac79fc0233eceb9dfe55636280ffec13c1dc71ac1ac4a" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.009907 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.021956 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.027363 4992 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.027420 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.028204 4992 generic.go:334] "Generic (PLEG): container finished" podID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerID="1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f" exitCode=0 Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.028264 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerDied","Data":"1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f"} Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.097467 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:38 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:38 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:38 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.097554 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.104386 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.257426 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-j68fr"] Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.281240 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-j86m7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.458294 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.460733 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.512284 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.866455 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 08:25:38 crc kubenswrapper[4992]: E1211 08:25:38.867108 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7656745a-706d-4652-9db6-e94237d4999c" containerName="collect-profiles" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.867126 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7656745a-706d-4652-9db6-e94237d4999c" containerName="collect-profiles" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.867265 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7656745a-706d-4652-9db6-e94237d4999c" containerName="collect-profiles" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.867786 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.870212 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.870422 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 11 08:25:38 crc kubenswrapper[4992]: I1211 08:25:38.878902 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.001316 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.001385 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.055090 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j68fr" event={"ID":"1b67a6a3-6d97-4b58-96d9-f0909df30802","Type":"ContainerStarted","Data":"bbfd9c3bc2ffcec5e152b60a73fe0e3b82841924905c692cd919d245cab2f5ab"} Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.060159 4992 generic.go:334] "Generic (PLEG): container finished" podID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerID="f912d88e95b0b120b7d9d2f3b8873249a2e82c3d89cbd08f007f517b4ee12734" exitCode=0 Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.062146 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerDied","Data":"f912d88e95b0b120b7d9d2f3b8873249a2e82c3d89cbd08f007f517b4ee12734"} Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.113737 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.113778 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.114469 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.174513 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wt2b6" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.184058 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:39 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:39 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:39 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.184133 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.237843 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:39 crc kubenswrapper[4992]: I1211 08:25:39.870562 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:39.986592 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j86m7"] Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.099819 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:40 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:40 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:40 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.100150 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.230771 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" event={"ID":"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7","Type":"ContainerStarted","Data":"9848e08e8d1a913d1002d8151ee71d8bcfd642f4bfe68c2aea386b663bba82c3"} Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.292789 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.293509 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.296013 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.296164 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.302356 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.332029 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.332116 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.435102 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.435149 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.435276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.486224 4992 patch_prober.go:28] interesting pod/apiserver-76f77b778f-k7jwn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]log ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]etcd ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/max-in-flight-filter ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 08:25:40 crc kubenswrapper[4992]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 08:25:40 crc kubenswrapper[4992]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 08:25:40 crc kubenswrapper[4992]: 
[+]poststarthook/openshift.io-startinformers ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 08:25:40 crc kubenswrapper[4992]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 08:25:40 crc kubenswrapper[4992]: livez check failed Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.486329 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" podUID="eea4eb1b-17a9-468f-981b-b26d90c75221" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.495356 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.754879 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:40 crc kubenswrapper[4992]: I1211 08:25:40.867840 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.119977 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:41 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:41 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:41 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.120033 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.243520 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f906dd06-439e-495f-aaf9-fe2cd934ab0e","Type":"ContainerStarted","Data":"e53b086b90cae8668f65339b9ee5c189ee239fb6e8d60a72ee793ffb4293682c"} Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.249885 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" event={"ID":"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7","Type":"ContainerStarted","Data":"eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab"} Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.250902 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 
08:25:41.267758 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j68fr" event={"ID":"1b67a6a3-6d97-4b58-96d9-f0909df30802","Type":"ContainerStarted","Data":"26becbf8c564b026777a788a74293c13fcf3a9d60b9bc49bf2a6d5b91c5b4c68"} Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.293483 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" podStartSLOduration=146.29345302 podStartE2EDuration="2m26.29345302s" podCreationTimestamp="2025-12-11 08:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:41.283611655 +0000 UTC m=+165.543085601" watchObservedRunningTime="2025-12-11 08:25:41.29345302 +0000 UTC m=+165.552926946" Dec 11 08:25:41 crc kubenswrapper[4992]: I1211 08:25:41.294392 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 08:25:41 crc kubenswrapper[4992]: W1211 08:25:41.471939 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7151fe77_0ddc_43bd_b0bb_6e557b82ec7b.slice/crio-cd2620bc2c9e377a1980b94cd3f09578f81ba6e628e76d32b7a77540d64917b9 WatchSource:0}: Error finding container cd2620bc2c9e377a1980b94cd3f09578f81ba6e628e76d32b7a77540d64917b9: Status 404 returned error can't find the container with id cd2620bc2c9e377a1980b94cd3f09578f81ba6e628e76d32b7a77540d64917b9 Dec 11 08:25:42 crc kubenswrapper[4992]: I1211 08:25:42.123930 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:42 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:42 crc kubenswrapper[4992]: [+]process-running ok Dec 
11 08:25:42 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:42 crc kubenswrapper[4992]: I1211 08:25:42.124011 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:42 crc kubenswrapper[4992]: I1211 08:25:42.348353 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b","Type":"ContainerStarted","Data":"cd2620bc2c9e377a1980b94cd3f09578f81ba6e628e76d32b7a77540d64917b9"} Dec 11 08:25:42 crc kubenswrapper[4992]: I1211 08:25:42.375500 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-j68fr" event={"ID":"1b67a6a3-6d97-4b58-96d9-f0909df30802","Type":"ContainerStarted","Data":"7f1c0d3e0ba0278323d5f5a4b21c20127b19bd7ca7e0d7927665fe216dcf648f"} Dec 11 08:25:42 crc kubenswrapper[4992]: I1211 08:25:42.413343 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-j68fr" podStartSLOduration=148.413321186 podStartE2EDuration="2m28.413321186s" podCreationTimestamp="2025-12-11 08:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:42.399800472 +0000 UTC m=+166.659274398" watchObservedRunningTime="2025-12-11 08:25:42.413321186 +0000 UTC m=+166.672795112" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.096964 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:43 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 
08:25:43 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:43 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.097032 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.178075 4992 patch_prober.go:28] interesting pod/console-f9d7485db-9lslc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.178145 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lslc" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.365381 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.365462 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.365443 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.366778 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.384031 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f906dd06-439e-495f-aaf9-fe2cd934ab0e","Type":"ContainerStarted","Data":"f01df32e4457c006cfc1955fb1c0c26b9936aa2b389ef58428ee7629a4cdf383"} Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.395532 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b","Type":"ContainerStarted","Data":"a0bed22b795fb083cb51897f30109481ff041a75202d66a4b58ed43ae8e1bf30"} Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.407457 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.407434552 podStartE2EDuration="5.407434552s" podCreationTimestamp="2025-12-11 08:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:43.402230752 +0000 UTC m=+167.661704668" watchObservedRunningTime="2025-12-11 08:25:43.407434552 +0000 UTC m=+167.666908478" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.413144 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:43 crc 
kubenswrapper[4992]: I1211 08:25:43.418923 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-k7jwn" Dec 11 08:25:43 crc kubenswrapper[4992]: I1211 08:25:43.426827 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.426773892 podStartE2EDuration="3.426773892s" podCreationTimestamp="2025-12-11 08:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:25:43.424360067 +0000 UTC m=+167.683834033" watchObservedRunningTime="2025-12-11 08:25:43.426773892 +0000 UTC m=+167.686247818" Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.093689 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:44 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:44 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:44 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.094173 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.291259 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.422649 4992 generic.go:334] "Generic (PLEG): container finished" podID="f906dd06-439e-495f-aaf9-fe2cd934ab0e" 
containerID="f01df32e4457c006cfc1955fb1c0c26b9936aa2b389ef58428ee7629a4cdf383" exitCode=0 Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.423158 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f906dd06-439e-495f-aaf9-fe2cd934ab0e","Type":"ContainerDied","Data":"f01df32e4457c006cfc1955fb1c0c26b9936aa2b389ef58428ee7629a4cdf383"} Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.426205 4992 generic.go:334] "Generic (PLEG): container finished" podID="7151fe77-0ddc-43bd-b0bb-6e557b82ec7b" containerID="a0bed22b795fb083cb51897f30109481ff041a75202d66a4b58ed43ae8e1bf30" exitCode=0 Dec 11 08:25:44 crc kubenswrapper[4992]: I1211 08:25:44.427231 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b","Type":"ContainerDied","Data":"a0bed22b795fb083cb51897f30109481ff041a75202d66a4b58ed43ae8e1bf30"} Dec 11 08:25:45 crc kubenswrapper[4992]: I1211 08:25:45.092042 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:45 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:45 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:45 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:45 crc kubenswrapper[4992]: I1211 08:25:45.092110 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.093756 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:46 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:46 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:46 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.094246 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.110873 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.118802 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.185188 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kube-api-access\") pod \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.185275 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kubelet-dir\") pod \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.185295 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kubelet-dir\") pod \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\" (UID: \"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b\") " Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.185347 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kube-api-access\") pod \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\" (UID: \"f906dd06-439e-495f-aaf9-fe2cd934ab0e\") " Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.185443 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7151fe77-0ddc-43bd-b0bb-6e557b82ec7b" (UID: "7151fe77-0ddc-43bd-b0bb-6e557b82ec7b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.185443 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f906dd06-439e-495f-aaf9-fe2cd934ab0e" (UID: "f906dd06-439e-495f-aaf9-fe2cd934ab0e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.186748 4992 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.186788 4992 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.195933 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f906dd06-439e-495f-aaf9-fe2cd934ab0e" (UID: "f906dd06-439e-495f-aaf9-fe2cd934ab0e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.198891 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7151fe77-0ddc-43bd-b0bb-6e557b82ec7b" (UID: "7151fe77-0ddc-43bd-b0bb-6e557b82ec7b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.288141 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f906dd06-439e-495f-aaf9-fe2cd934ab0e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.288181 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7151fe77-0ddc-43bd-b0bb-6e557b82ec7b-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.476494 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7151fe77-0ddc-43bd-b0bb-6e557b82ec7b","Type":"ContainerDied","Data":"cd2620bc2c9e377a1980b94cd3f09578f81ba6e628e76d32b7a77540d64917b9"} Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.476520 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.476545 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2620bc2c9e377a1980b94cd3f09578f81ba6e628e76d32b7a77540d64917b9" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.478395 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f906dd06-439e-495f-aaf9-fe2cd934ab0e","Type":"ContainerDied","Data":"e53b086b90cae8668f65339b9ee5c189ee239fb6e8d60a72ee793ffb4293682c"} Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.478419 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53b086b90cae8668f65339b9ee5c189ee239fb6e8d60a72ee793ffb4293682c" Dec 11 08:25:46 crc kubenswrapper[4992]: I1211 08:25:46.478470 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 08:25:47 crc kubenswrapper[4992]: I1211 08:25:47.093491 4992 patch_prober.go:28] interesting pod/router-default-5444994796-mhq4b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 08:25:47 crc kubenswrapper[4992]: [-]has-synced failed: reason withheld Dec 11 08:25:47 crc kubenswrapper[4992]: [+]process-running ok Dec 11 08:25:47 crc kubenswrapper[4992]: healthz check failed Dec 11 08:25:47 crc kubenswrapper[4992]: I1211 08:25:47.093551 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mhq4b" podUID="56f8920d-7c08-4dbe-a3ca-b716ac949eda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 08:25:48 crc kubenswrapper[4992]: I1211 08:25:48.113348 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:48 crc kubenswrapper[4992]: I1211 08:25:48.117784 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mhq4b" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.181513 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.186701 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362099 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" 
start-of-body= Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362132 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362154 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362194 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-h6k7l" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362890 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c5bab95ce628c4b39387d1f1ecc9a1b35ce9a795fbc606d9fe3e3226dd11d6dd"} pod="openshift-console/downloads-7954f5f757-h6k7l" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362974 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" containerID="cri-o://c5bab95ce628c4b39387d1f1ecc9a1b35ce9a795fbc606d9fe3e3226dd11d6dd" gracePeriod=2 Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.362186 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 
10.217.0.18:8080: connect: connection refused" Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.363296 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:25:53 crc kubenswrapper[4992]: I1211 08:25:53.363320 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:25:54 crc kubenswrapper[4992]: I1211 08:25:54.557843 4992 generic.go:334] "Generic (PLEG): container finished" podID="973372a1-5f38-40b5-8837-bd2236baf511" containerID="c5bab95ce628c4b39387d1f1ecc9a1b35ce9a795fbc606d9fe3e3226dd11d6dd" exitCode=0 Dec 11 08:25:54 crc kubenswrapper[4992]: I1211 08:25:54.557905 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h6k7l" event={"ID":"973372a1-5f38-40b5-8837-bd2236baf511","Type":"ContainerDied","Data":"c5bab95ce628c4b39387d1f1ecc9a1b35ce9a795fbc606d9fe3e3226dd11d6dd"} Dec 11 08:25:58 crc kubenswrapper[4992]: I1211 08:25:58.518820 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" Dec 11 08:26:03 crc kubenswrapper[4992]: I1211 08:26:03.362465 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:03 crc kubenswrapper[4992]: I1211 08:26:03.363270 4992 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:03 crc kubenswrapper[4992]: I1211 08:26:03.576340 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 08:26:04 crc kubenswrapper[4992]: I1211 08:26:04.294623 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-7qlnk" Dec 11 08:26:05 crc kubenswrapper[4992]: I1211 08:26:05.379254 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:26:05 crc kubenswrapper[4992]: I1211 08:26:05.379348 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.585517 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 08:26:12 crc kubenswrapper[4992]: E1211 08:26:12.586470 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7151fe77-0ddc-43bd-b0bb-6e557b82ec7b" containerName="pruner" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.586491 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7151fe77-0ddc-43bd-b0bb-6e557b82ec7b" containerName="pruner" Dec 11 08:26:12 crc 
kubenswrapper[4992]: E1211 08:26:12.586517 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f906dd06-439e-495f-aaf9-fe2cd934ab0e" containerName="pruner" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.586526 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f906dd06-439e-495f-aaf9-fe2cd934ab0e" containerName="pruner" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.586691 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7151fe77-0ddc-43bd-b0bb-6e557b82ec7b" containerName="pruner" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.586717 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f906dd06-439e-495f-aaf9-fe2cd934ab0e" containerName="pruner" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.587408 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.590775 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.590794 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.591943 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.650242 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.650325 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.751759 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.752146 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.751932 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.874370 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:12 crc kubenswrapper[4992]: I1211 08:26:12.909263 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:13 crc kubenswrapper[4992]: I1211 08:26:13.362790 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:13 crc kubenswrapper[4992]: I1211 08:26:13.362866 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:17 crc kubenswrapper[4992]: I1211 08:26:17.885297 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 08:26:17 crc kubenswrapper[4992]: I1211 08:26:17.887045 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:17 crc kubenswrapper[4992]: I1211 08:26:17.903693 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 08:26:17 crc kubenswrapper[4992]: I1211 08:26:17.935511 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:17 crc kubenswrapper[4992]: I1211 08:26:17.935722 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-var-lock\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:17 crc kubenswrapper[4992]: I1211 08:26:17.935755 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kube-api-access\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.037181 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.037285 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-var-lock\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.037316 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kube-api-access\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.037385 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kubelet-dir\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.037505 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-var-lock\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.062526 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kube-api-access\") pod \"installer-9-crc\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:18 crc kubenswrapper[4992]: I1211 08:26:18.213843 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:26:23 crc kubenswrapper[4992]: I1211 08:26:23.362925 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:23 crc kubenswrapper[4992]: I1211 08:26:23.363964 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:24 crc kubenswrapper[4992]: E1211 08:26:24.760143 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 08:26:24 crc kubenswrapper[4992]: E1211 08:26:24.761029 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmqzj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mfdbv_openshift-marketplace(fa659f5d-c90f-4aa0-aacb-79889eb26e8e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:24 crc kubenswrapper[4992]: E1211 08:26:24.762276 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mfdbv" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" Dec 11 08:26:28 crc 
kubenswrapper[4992]: E1211 08:26:28.907501 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mfdbv" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" Dec 11 08:26:28 crc kubenswrapper[4992]: E1211 08:26:28.992740 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 08:26:28 crc kubenswrapper[4992]: E1211 08:26:28.992985 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dljmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ccvgp_openshift-marketplace(5a7d128f-32e9-47c5-bac4-6e94898ea0b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:28 crc kubenswrapper[4992]: E1211 08:26:28.995570 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ccvgp" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" Dec 11 08:26:30 crc 
kubenswrapper[4992]: E1211 08:26:30.790285 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ccvgp" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" Dec 11 08:26:30 crc kubenswrapper[4992]: I1211 08:26:30.822247 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gwnxp"] Dec 11 08:26:30 crc kubenswrapper[4992]: E1211 08:26:30.927031 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 08:26:30 crc kubenswrapper[4992]: E1211 08:26:30.927383 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-54zs5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vtvvj_openshift-marketplace(b2ecf8e0-1db5-44c8-84d2-321e753bf872): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:30 crc kubenswrapper[4992]: E1211 08:26:30.929103 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vtvvj" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" Dec 11 08:26:33 crc 
kubenswrapper[4992]: I1211 08:26:33.362778 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:33 crc kubenswrapper[4992]: I1211 08:26:33.363135 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:34 crc kubenswrapper[4992]: E1211 08:26:34.996157 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vtvvj" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.096397 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.096595 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pm6tn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lkzhk_openshift-marketplace(e27af7be-51b7-40ad-a740-9f9cc14fa328): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.097815 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-lkzhk" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" Dec 11 08:26:35 crc 
kubenswrapper[4992]: E1211 08:26:35.118498 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.118709 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fvg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-zbc6l_openshift-marketplace(4a5362af-66f3-4482-8f2c-2f5748283eac): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.118774 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.118860 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6gqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnc
e:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dkhhd_openshift-marketplace(4de0775a-dd54-436c-a5ff-fd6782a559a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.119960 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zbc6l" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" Dec 11 08:26:35 crc kubenswrapper[4992]: E1211 08:26:35.119995 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dkhhd" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.378773 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.378857 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.378916 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.379722 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.379797 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2" gracePeriod=600 Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.928575 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2" exitCode=0 Dec 11 08:26:35 crc kubenswrapper[4992]: I1211 08:26:35.929925 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2"} Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.581017 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dkhhd" 
podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.581136 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lkzhk" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.581213 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zbc6l" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.644439 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.645047 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wspxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2s2wb_openshift-marketplace(f06eb87f-2805-4e34-bbb7-86d5ee8d9f37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.646311 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2s2wb" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" Dec 11 08:26:36 crc 
kubenswrapper[4992]: E1211 08:26:36.703350 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.703528 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49xh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-c2696_openshift-marketplace(ce3e165d-68f9-42f9-bfca-d08aa820f146): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.704797 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c2696" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" Dec 11 08:26:36 crc kubenswrapper[4992]: I1211 08:26:36.940343 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"aff7b65415c21f93e15b1dea571d2abe79882b3fb99188a8013f1756d634527d"} Dec 11 08:26:36 crc kubenswrapper[4992]: I1211 08:26:36.942957 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 08:26:36 crc kubenswrapper[4992]: W1211 08:26:36.947845 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda351e36b_2a2b_421b_ae5d_d15ecfcc476d.slice/crio-235957f125f65e9edf4395ca345bcbccedd41a1f33d65e2725873710001a0709 WatchSource:0}: Error finding container 235957f125f65e9edf4395ca345bcbccedd41a1f33d65e2725873710001a0709: Status 404 returned error can't find the container with id 235957f125f65e9edf4395ca345bcbccedd41a1f33d65e2725873710001a0709 Dec 11 08:26:36 crc kubenswrapper[4992]: I1211 08:26:36.948847 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-h6k7l" event={"ID":"973372a1-5f38-40b5-8837-bd2236baf511","Type":"ContainerStarted","Data":"bedacd29a1517cb700013800436b5e4b3bd0f3e89216f61b8501565bc1c5b810"} Dec 11 08:26:36 crc 
kubenswrapper[4992]: I1211 08:26:36.949684 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:36 crc kubenswrapper[4992]: I1211 08:26:36.949759 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.950389 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2s2wb" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" Dec 11 08:26:36 crc kubenswrapper[4992]: E1211 08:26:36.950972 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c2696" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.101673 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 08:26:37 crc kubenswrapper[4992]: W1211 08:26:37.121247 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod23ca8dad_ba79_4d2f_a891_f2fcf59922af.slice/crio-2f215f478a1df5cf1077a430c4514fc6dd5466117c0b3b592318df55f54d81fd WatchSource:0}: Error finding container 
2f215f478a1df5cf1077a430c4514fc6dd5466117c0b3b592318df55f54d81fd: Status 404 returned error can't find the container with id 2f215f478a1df5cf1077a430c4514fc6dd5466117c0b3b592318df55f54d81fd Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.975856 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a351e36b-2a2b-421b-ae5d-d15ecfcc476d","Type":"ContainerStarted","Data":"6a5282f35c6266f7c69d5c863ec0cb3a894e8103cda738ed0833af139d42bd78"} Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.977526 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a351e36b-2a2b-421b-ae5d-d15ecfcc476d","Type":"ContainerStarted","Data":"235957f125f65e9edf4395ca345bcbccedd41a1f33d65e2725873710001a0709"} Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.983724 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23ca8dad-ba79-4d2f-a891-f2fcf59922af","Type":"ContainerStarted","Data":"828ae5dba594e4a0cf90b6383ba7e1011a79913658aa788a1a3b8a013c308d29"} Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.983813 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23ca8dad-ba79-4d2f-a891-f2fcf59922af","Type":"ContainerStarted","Data":"2f215f478a1df5cf1077a430c4514fc6dd5466117c0b3b592318df55f54d81fd"} Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.984763 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-h6k7l" Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.985372 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:37 
crc kubenswrapper[4992]: I1211 08:26:37.985468 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:37 crc kubenswrapper[4992]: I1211 08:26:37.996830 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=25.996807758 podStartE2EDuration="25.996807758s" podCreationTimestamp="2025-12-11 08:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:26:37.991523002 +0000 UTC m=+222.250996928" watchObservedRunningTime="2025-12-11 08:26:37.996807758 +0000 UTC m=+222.256281684" Dec 11 08:26:38 crc kubenswrapper[4992]: I1211 08:26:38.987482 4992 generic.go:334] "Generic (PLEG): container finished" podID="a351e36b-2a2b-421b-ae5d-d15ecfcc476d" containerID="6a5282f35c6266f7c69d5c863ec0cb3a894e8103cda738ed0833af139d42bd78" exitCode=0 Dec 11 08:26:38 crc kubenswrapper[4992]: I1211 08:26:38.987543 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a351e36b-2a2b-421b-ae5d-d15ecfcc476d","Type":"ContainerDied","Data":"6a5282f35c6266f7c69d5c863ec0cb3a894e8103cda738ed0833af139d42bd78"} Dec 11 08:26:38 crc kubenswrapper[4992]: I1211 08:26:38.988785 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:38 crc kubenswrapper[4992]: I1211 08:26:38.988858 4992 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:39 crc kubenswrapper[4992]: I1211 08:26:39.008013 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=22.00799214 podStartE2EDuration="22.00799214s" podCreationTimestamp="2025-12-11 08:26:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:26:38.037945402 +0000 UTC m=+222.297419328" watchObservedRunningTime="2025-12-11 08:26:39.00799214 +0000 UTC m=+223.267466066" Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.542580 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.672310 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kubelet-dir\") pod \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.672459 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kube-api-access\") pod \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\" (UID: \"a351e36b-2a2b-421b-ae5d-d15ecfcc476d\") " Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.672761 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"a351e36b-2a2b-421b-ae5d-d15ecfcc476d" (UID: "a351e36b-2a2b-421b-ae5d-d15ecfcc476d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.673002 4992 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.691234 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a351e36b-2a2b-421b-ae5d-d15ecfcc476d" (UID: "a351e36b-2a2b-421b-ae5d-d15ecfcc476d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:26:40 crc kubenswrapper[4992]: I1211 08:26:40.774059 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a351e36b-2a2b-421b-ae5d-d15ecfcc476d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:26:41 crc kubenswrapper[4992]: I1211 08:26:41.001924 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a351e36b-2a2b-421b-ae5d-d15ecfcc476d","Type":"ContainerDied","Data":"235957f125f65e9edf4395ca345bcbccedd41a1f33d65e2725873710001a0709"} Dec 11 08:26:41 crc kubenswrapper[4992]: I1211 08:26:41.001982 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235957f125f65e9edf4395ca345bcbccedd41a1f33d65e2725873710001a0709" Dec 11 08:26:41 crc kubenswrapper[4992]: I1211 08:26:41.001996 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 08:26:43 crc kubenswrapper[4992]: I1211 08:26:43.362156 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:43 crc kubenswrapper[4992]: I1211 08:26:43.362217 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:43 crc kubenswrapper[4992]: I1211 08:26:43.362291 4992 patch_prober.go:28] interesting pod/downloads-7954f5f757-h6k7l container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Dec 11 08:26:43 crc kubenswrapper[4992]: I1211 08:26:43.362320 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-h6k7l" podUID="973372a1-5f38-40b5-8837-bd2236baf511" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Dec 11 08:26:53 crc kubenswrapper[4992]: I1211 08:26:53.380909 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-h6k7l" Dec 11 08:26:55 crc kubenswrapper[4992]: I1211 08:26:55.858902 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" podUID="fe3eedb0-f613-4104-ba42-a22301757402" containerName="oauth-openshift" 
containerID="cri-o://1dd7d7e2e11e1d6fd34b46af9990b5f248d02cb2192f3b80c16457cf54f33a9e" gracePeriod=15 Dec 11 08:26:58 crc kubenswrapper[4992]: I1211 08:26:58.271011 4992 generic.go:334] "Generic (PLEG): container finished" podID="fe3eedb0-f613-4104-ba42-a22301757402" containerID="1dd7d7e2e11e1d6fd34b46af9990b5f248d02cb2192f3b80c16457cf54f33a9e" exitCode=0 Dec 11 08:26:58 crc kubenswrapper[4992]: I1211 08:26:58.271089 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" event={"ID":"fe3eedb0-f613-4104-ba42-a22301757402","Type":"ContainerDied","Data":"1dd7d7e2e11e1d6fd34b46af9990b5f248d02cb2192f3b80c16457cf54f33a9e"} Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.718439 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.748510 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-54b5c98c4-jqmsr"] Dec 11 08:27:02 crc kubenswrapper[4992]: E1211 08:27:02.757760 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3eedb0-f613-4104-ba42-a22301757402" containerName="oauth-openshift" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.757810 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3eedb0-f613-4104-ba42-a22301757402" containerName="oauth-openshift" Dec 11 08:27:02 crc kubenswrapper[4992]: E1211 08:27:02.757851 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a351e36b-2a2b-421b-ae5d-d15ecfcc476d" containerName="pruner" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.757858 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="a351e36b-2a2b-421b-ae5d-d15ecfcc476d" containerName="pruner" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.758187 4992 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fe3eedb0-f613-4104-ba42-a22301757402" containerName="oauth-openshift" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.758206 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="a351e36b-2a2b-421b-ae5d-d15ecfcc476d" containerName="pruner" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.758869 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.760004 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54b5c98c4-jqmsr"] Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901127 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-trusted-ca-bundle\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901186 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-login\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901241 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-session\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901268 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/fe3eedb0-f613-4104-ba42-a22301757402-audit-dir\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901339 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-router-certs\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901380 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-idp-0-file-data\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901423 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-service-ca\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901454 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-error\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901493 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-serving-cert\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901519 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-provider-selection\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901552 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-ocp-branding-template\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901583 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmgsl\" (UniqueName: \"kubernetes.io/projected/fe3eedb0-f613-4104-ba42-a22301757402-kube-api-access-gmgsl\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901622 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-audit-policies\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901669 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-cliconfig\") pod \"fe3eedb0-f613-4104-ba42-a22301757402\" (UID: \"fe3eedb0-f613-4104-ba42-a22301757402\") " Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901859 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-audit-dir\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901893 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-login\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901918 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-service-ca\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.901942 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " 
pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902005 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902037 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-audit-policies\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902075 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902106 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-error\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902138 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902165 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-router-certs\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902196 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-session\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902224 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902276 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpxld\" (UniqueName: 
\"kubernetes.io/projected/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-kube-api-access-xpxld\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902304 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902345 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.902760 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe3eedb0-f613-4104-ba42-a22301757402-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.903100 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.903188 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.903900 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.939537 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3eedb0-f613-4104-ba42-a22301757402-kube-api-access-gmgsl" (OuterVolumeSpecName: "kube-api-access-gmgsl") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "kube-api-access-gmgsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.939673 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.939670 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.939997 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.940401 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.940679 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.940856 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.940982 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:02 crc kubenswrapper[4992]: I1211 08:27:02.941324 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fe3eedb0-f613-4104-ba42-a22301757402" (UID: "fe3eedb0-f613-4104-ba42-a22301757402"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003446 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003493 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-audit-dir\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003514 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-login\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003531 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-service-ca\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003547 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003591 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003623 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-audit-policies\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003681 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003712 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-error\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " 
pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003746 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003774 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-router-certs\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003805 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-session\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003827 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003871 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpxld\" (UniqueName: 
\"kubernetes.io/projected/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-kube-api-access-xpxld\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003924 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmgsl\" (UniqueName: \"kubernetes.io/projected/fe3eedb0-f613-4104-ba42-a22301757402-kube-api-access-gmgsl\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003938 4992 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003950 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004367 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004394 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004408 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 
08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004422 4992 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fe3eedb0-f613-4104-ba42-a22301757402-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004435 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004452 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004464 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004478 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004494 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004509 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.004523 4992 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fe3eedb0-f613-4104-ba42-a22301757402-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.003593 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-audit-dir\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.005589 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-service-ca\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.006277 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.007588 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-audit-policies\") pod 
\"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.008141 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-error\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.009240 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-router-certs\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.013748 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-login\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.014126 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.014385 4992 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.015603 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.025116 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.025231 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.026833 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-v4-0-config-system-session\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: 
\"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.027026 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpxld\" (UniqueName: \"kubernetes.io/projected/b1f66a2b-51a4-42e1-9c8e-261e881ebbd7-kube-api-access-xpxld\") pod \"oauth-openshift-54b5c98c4-jqmsr\" (UID: \"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7\") " pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.077408 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.303850 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerID="6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad" exitCode=0 Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.303973 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfdbv" event={"ID":"fa659f5d-c90f-4aa0-aacb-79889eb26e8e","Type":"ContainerDied","Data":"6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad"} Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.314058 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerStarted","Data":"55761ba24cca98101fadd1f6ac968b4ead25fbe5d0e7aaeedbfadb39caa0d185"} Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.316844 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" event={"ID":"fe3eedb0-f613-4104-ba42-a22301757402","Type":"ContainerDied","Data":"463ec31f2bfc1bd1e0addcd6054836790cdfba8b5b774a940ea267944e549af6"} Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 
08:27:03.316878 4992 scope.go:117] "RemoveContainer" containerID="1dd7d7e2e11e1d6fd34b46af9990b5f248d02cb2192f3b80c16457cf54f33a9e" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.316975 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gwnxp" Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.377325 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gwnxp"] Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.380089 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gwnxp"] Dec 11 08:27:03 crc kubenswrapper[4992]: I1211 08:27:03.486504 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54b5c98c4-jqmsr"] Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.139512 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3eedb0-f613-4104-ba42-a22301757402" path="/var/lib/kubelet/pods/fe3eedb0-f613-4104-ba42-a22301757402/volumes" Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.323270 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerStarted","Data":"be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.328746 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" event={"ID":"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7","Type":"ContainerStarted","Data":"ab36ba49282993b233d30ed19198395d48a0b58f7dd5659d2fa5ff7f5ada2155"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.328804 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" 
event={"ID":"b1f66a2b-51a4-42e1-9c8e-261e881ebbd7","Type":"ContainerStarted","Data":"0c4bca66d676208436bcb3a4d303ed9d2b3ed6b02d7f5cc8ff9b0fc1042c6dac"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.329804 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.345426 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerStarted","Data":"e18f667059d8177093e6117b585ff955ca44428daa6b73b693d26f0c6b738fe8"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.349160 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerStarted","Data":"e7f68d21e0179417c004412ea063275b2ae373071b752c64ace5a68b50cc6c33"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.351227 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerStarted","Data":"6017607a07ff29f287029a19debe1358b84656b2e745bb69a086330a79ff55a0"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.353183 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerStarted","Data":"0d181d2047116fe42a5f65cc89e4a27924c84910acae42168cda3683d67c7b59"} Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.355559 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerStarted","Data":"e9985d19e210687d730f300e3c7c951350082bd577e460166618dec4a716cd7a"} Dec 11 08:27:04 crc 
kubenswrapper[4992]: I1211 08:27:04.487541 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" podStartSLOduration=34.487524431 podStartE2EDuration="34.487524431s" podCreationTimestamp="2025-12-11 08:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:27:04.486583485 +0000 UTC m=+248.746057421" watchObservedRunningTime="2025-12-11 08:27:04.487524431 +0000 UTC m=+248.746998357" Dec 11 08:27:04 crc kubenswrapper[4992]: I1211 08:27:04.912987 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-54b5c98c4-jqmsr" Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.384142 4992 generic.go:334] "Generic (PLEG): container finished" podID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerID="e18f667059d8177093e6117b585ff955ca44428daa6b73b693d26f0c6b738fe8" exitCode=0 Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.384203 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerDied","Data":"e18f667059d8177093e6117b585ff955ca44428daa6b73b693d26f0c6b738fe8"} Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.390250 4992 generic.go:334] "Generic (PLEG): container finished" podID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerID="e7f68d21e0179417c004412ea063275b2ae373071b752c64ace5a68b50cc6c33" exitCode=0 Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.390330 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerDied","Data":"e7f68d21e0179417c004412ea063275b2ae373071b752c64ace5a68b50cc6c33"} Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.393776 4992 
generic.go:334] "Generic (PLEG): container finished" podID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerID="6017607a07ff29f287029a19debe1358b84656b2e745bb69a086330a79ff55a0" exitCode=0 Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.393835 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerDied","Data":"6017607a07ff29f287029a19debe1358b84656b2e745bb69a086330a79ff55a0"} Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.401681 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfdbv" event={"ID":"fa659f5d-c90f-4aa0-aacb-79889eb26e8e","Type":"ContainerStarted","Data":"51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487"} Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.405126 4992 generic.go:334] "Generic (PLEG): container finished" podID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerID="be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3" exitCode=0 Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.405208 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerDied","Data":"be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3"} Dec 11 08:27:05 crc kubenswrapper[4992]: I1211 08:27:05.445804 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mfdbv" podStartSLOduration=6.963810367 podStartE2EDuration="1m34.445789376s" podCreationTimestamp="2025-12-11 08:25:31 +0000 UTC" firstStartedPulling="2025-12-11 08:25:36.506485951 +0000 UTC m=+160.765959877" lastFinishedPulling="2025-12-11 08:27:03.98846496 +0000 UTC m=+248.247938886" observedRunningTime="2025-12-11 08:27:05.443354239 +0000 UTC m=+249.702828165" watchObservedRunningTime="2025-12-11 
08:27:05.445789376 +0000 UTC m=+249.705263302" Dec 11 08:27:06 crc kubenswrapper[4992]: I1211 08:27:06.417294 4992 generic.go:334] "Generic (PLEG): container finished" podID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerID="e9985d19e210687d730f300e3c7c951350082bd577e460166618dec4a716cd7a" exitCode=0 Dec 11 08:27:06 crc kubenswrapper[4992]: I1211 08:27:06.417341 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerDied","Data":"e9985d19e210687d730f300e3c7c951350082bd577e460166618dec4a716cd7a"} Dec 11 08:27:06 crc kubenswrapper[4992]: I1211 08:27:06.421014 4992 generic.go:334] "Generic (PLEG): container finished" podID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerID="55761ba24cca98101fadd1f6ac968b4ead25fbe5d0e7aaeedbfadb39caa0d185" exitCode=0 Dec 11 08:27:06 crc kubenswrapper[4992]: I1211 08:27:06.421597 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerDied","Data":"55761ba24cca98101fadd1f6ac968b4ead25fbe5d0e7aaeedbfadb39caa0d185"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.427669 4992 generic.go:334] "Generic (PLEG): container finished" podID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerID="0d181d2047116fe42a5f65cc89e4a27924c84910acae42168cda3683d67c7b59" exitCode=0 Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.428028 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerDied","Data":"0d181d2047116fe42a5f65cc89e4a27924c84910acae42168cda3683d67c7b59"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.430538 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" 
event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerStarted","Data":"15eb2ac41312a2a00fce57fe2c5307be5c72ea7a59a646cd9a946d4d56729e96"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.432913 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerStarted","Data":"0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.435160 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerStarted","Data":"b763351b29fdfe47fd7d86c62800ff54570121abb6f54130d6d5f81628014120"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.440112 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerStarted","Data":"f966a07bc016a892f1d2ec08435d72f49df6fed820e5706808c556d16948c1be"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.442452 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerStarted","Data":"a2eecc3a0bfd8d74dedd5b5d5d470f723743e667547a042e693ed92086b24657"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.444429 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerStarted","Data":"7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05"} Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.469862 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zbc6l" podStartSLOduration=8.056607855 
podStartE2EDuration="1m37.469840818s" podCreationTimestamp="2025-12-11 08:25:30 +0000 UTC" firstStartedPulling="2025-12-11 08:25:36.622724874 +0000 UTC m=+160.882198800" lastFinishedPulling="2025-12-11 08:27:06.035957837 +0000 UTC m=+250.295431763" observedRunningTime="2025-12-11 08:27:07.467032431 +0000 UTC m=+251.726506357" watchObservedRunningTime="2025-12-11 08:27:07.469840818 +0000 UTC m=+251.729314744" Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.486835 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2s2wb" podStartSLOduration=7.190354627 podStartE2EDuration="1m35.486815785s" podCreationTimestamp="2025-12-11 08:25:32 +0000 UTC" firstStartedPulling="2025-12-11 08:25:37.796369847 +0000 UTC m=+162.055843773" lastFinishedPulling="2025-12-11 08:27:06.092830965 +0000 UTC m=+250.352304931" observedRunningTime="2025-12-11 08:27:07.482347793 +0000 UTC m=+251.741821719" watchObservedRunningTime="2025-12-11 08:27:07.486815785 +0000 UTC m=+251.746289711" Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.504854 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2696" podStartSLOduration=7.604397528 podStartE2EDuration="1m35.504835132s" podCreationTimestamp="2025-12-11 08:25:32 +0000 UTC" firstStartedPulling="2025-12-11 08:25:38.044790813 +0000 UTC m=+162.304264739" lastFinishedPulling="2025-12-11 08:27:05.945228417 +0000 UTC m=+250.204702343" observedRunningTime="2025-12-11 08:27:07.50329896 +0000 UTC m=+251.762772886" watchObservedRunningTime="2025-12-11 08:27:07.504835132 +0000 UTC m=+251.764309058" Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.525503 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccvgp" podStartSLOduration=5.086725225 podStartE2EDuration="1m34.525484311s" podCreationTimestamp="2025-12-11 08:25:33 +0000 UTC" 
firstStartedPulling="2025-12-11 08:25:37.799087839 +0000 UTC m=+162.058561765" lastFinishedPulling="2025-12-11 08:27:07.237846925 +0000 UTC m=+251.497320851" observedRunningTime="2025-12-11 08:27:07.523322711 +0000 UTC m=+251.782796637" watchObservedRunningTime="2025-12-11 08:27:07.525484311 +0000 UTC m=+251.784958237" Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.541237 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vtvvj" podStartSLOduration=7.959421199 podStartE2EDuration="1m37.541214725s" podCreationTimestamp="2025-12-11 08:25:30 +0000 UTC" firstStartedPulling="2025-12-11 08:25:36.473730541 +0000 UTC m=+160.733204467" lastFinishedPulling="2025-12-11 08:27:06.055524077 +0000 UTC m=+250.314997993" observedRunningTime="2025-12-11 08:27:07.537263496 +0000 UTC m=+251.796737432" watchObservedRunningTime="2025-12-11 08:27:07.541214725 +0000 UTC m=+251.800688651" Dec 11 08:27:07 crc kubenswrapper[4992]: I1211 08:27:07.558658 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dkhhd" podStartSLOduration=7.228405923 podStartE2EDuration="1m37.558627645s" podCreationTimestamp="2025-12-11 08:25:30 +0000 UTC" firstStartedPulling="2025-12-11 08:25:36.550050871 +0000 UTC m=+160.809524787" lastFinishedPulling="2025-12-11 08:27:06.880272583 +0000 UTC m=+251.139746509" observedRunningTime="2025-12-11 08:27:07.554547942 +0000 UTC m=+251.814021878" watchObservedRunningTime="2025-12-11 08:27:07.558627645 +0000 UTC m=+251.818101571" Dec 11 08:27:09 crc kubenswrapper[4992]: I1211 08:27:09.466827 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerStarted","Data":"2868de06e555506899add97378530e04066fbfcf61e5f8a04e6b915264ebd31b"} Dec 11 08:27:10 crc kubenswrapper[4992]: I1211 08:27:10.510967 4992 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkzhk" podStartSLOduration=7.541423749 podStartE2EDuration="1m36.510937294s" podCreationTimestamp="2025-12-11 08:25:34 +0000 UTC" firstStartedPulling="2025-12-11 08:25:39.06346093 +0000 UTC m=+163.322934856" lastFinishedPulling="2025-12-11 08:27:08.032974475 +0000 UTC m=+252.292448401" observedRunningTime="2025-12-11 08:27:10.507130559 +0000 UTC m=+254.766604485" watchObservedRunningTime="2025-12-11 08:27:10.510937294 +0000 UTC m=+254.770411220" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.187210 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.187274 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.308859 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.308924 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.358421 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.394444 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.520146 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.624327 4992 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.624901 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.664071 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.807836 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.807902 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:27:11 crc kubenswrapper[4992]: I1211 08:27:11.848126 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:27:12 crc kubenswrapper[4992]: I1211 08:27:12.558211 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:27:12 crc kubenswrapper[4992]: I1211 08:27:12.570389 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.304907 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.304990 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.347910 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 
08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.562077 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.566523 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.566607 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:27:13 crc kubenswrapper[4992]: I1211 08:27:13.612671 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:27:14 crc kubenswrapper[4992]: I1211 08:27:14.400830 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vtvvj"] Dec 11 08:27:14 crc kubenswrapper[4992]: I1211 08:27:14.521369 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vtvvj" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="registry-server" containerID="cri-o://7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05" gracePeriod=2 Dec 11 08:27:14 crc kubenswrapper[4992]: I1211 08:27:14.574989 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:27:14 crc kubenswrapper[4992]: I1211 08:27:14.860103 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:27:14 crc kubenswrapper[4992]: I1211 08:27:14.860159 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:27:14 crc kubenswrapper[4992]: I1211 08:27:14.913893 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.365624 4992 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.367501 4992 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369006 4992 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369176 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369617 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d" gracePeriod=15 Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369724 4992 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369733 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b" gracePeriod=15 Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369845 4992 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b" gracePeriod=15 Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.369910 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369925 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.369936 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369942 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369942 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d" gracePeriod=15 Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.369960 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c" gracePeriod=15 Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.369952 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370032 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.370072 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370082 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.370095 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370103 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.370117 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370125 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370386 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370402 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370411 4992 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370422 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370432 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370443 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 08:27:15 crc kubenswrapper[4992]: E1211 08:27:15.370591 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.370603 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.416025 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.416911 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.416968 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461521 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461559 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461610 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461675 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461747 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461813 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461845 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.461875 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.462712 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.514875 4992 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563738 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563823 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563868 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563899 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563913 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563985 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.563956 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564031 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564055 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564053 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564074 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564113 4992 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564152 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564211 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564240 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.564345 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.580036 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 
08:27:15.585122 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:27:15 crc kubenswrapper[4992]: I1211 08:27:15.714628 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:27:15 crc kubenswrapper[4992]: W1211 08:27:15.737984 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2abd9782378724870bba68bbe629b98cc951e373919540654a03811ff21a5dae WatchSource:0}: Error finding container 2abd9782378724870bba68bbe629b98cc951e373919540654a03811ff21a5dae: Status 404 returned error can't find the container with id 2abd9782378724870bba68bbe629b98cc951e373919540654a03811ff21a5dae Dec 11 08:27:16 crc kubenswrapper[4992]: I1211 08:27:16.199408 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mfdbv"] Dec 11 08:27:16 crc kubenswrapper[4992]: I1211 08:27:16.199757 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mfdbv" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="registry-server" containerID="cri-o://51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" gracePeriod=2 Dec 11 08:27:16 crc kubenswrapper[4992]: I1211 08:27:16.544654 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2abd9782378724870bba68bbe629b98cc951e373919540654a03811ff21a5dae"} Dec 11 08:27:16 crc kubenswrapper[4992]: I1211 08:27:16.800683 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2696"] Dec 11 08:27:16 crc kubenswrapper[4992]: I1211 08:27:16.801084 4992 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2696" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="registry-server" containerID="cri-o://0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7" gracePeriod=2 Dec 11 08:27:19 crc kubenswrapper[4992]: I1211 08:27:19.566681 4992 generic.go:334] "Generic (PLEG): container finished" podID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerID="7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05" exitCode=0 Dec 11 08:27:19 crc kubenswrapper[4992]: I1211 08:27:19.566762 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerDied","Data":"7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05"} Dec 11 08:27:20 crc kubenswrapper[4992]: I1211 08:27:20.082019 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.104422 4992 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.104977 4992 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.105316 4992 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection 
refused" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.105538 4992 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.105799 4992 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:20 crc kubenswrapper[4992]: I1211 08:27:20.105844 4992 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.106110 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="200ms" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.308040 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="400ms" Dec 11 08:27:20 crc kubenswrapper[4992]: E1211 08:27:20.710164 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="800ms" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.228010 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:27:21 crc 
kubenswrapper[4992]: I1211 08:27:21.229014 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.419912 4992 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.105:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18801bd1b47b4758 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 08:27:21.41932732 +0000 UTC m=+265.678801246,LastTimestamp:2025-12-11 08:27:21.41932732 +0000 UTC m=+265.678801246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.512058 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="1.6s" Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.625464 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05 is running failed: container process not found" containerID="7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.626556 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05 is running failed: container process not found" containerID="7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.626961 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05 is running failed: container process not found" containerID="7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.627013 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-vtvvj" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="registry-server" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.729758 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.730888 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.731151 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.808767 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487 is running failed: container process not found" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.809154 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487 is running failed: container process not found" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.809819 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487 is running failed: container process not found" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:27:21 crc kubenswrapper[4992]: E1211 08:27:21.809868 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-mfdbv" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="registry-server" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.858406 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-catalog-content\") pod \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.858481 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-utilities\") pod \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.858576 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54zs5\" (UniqueName: \"kubernetes.io/projected/b2ecf8e0-1db5-44c8-84d2-321e753bf872-kube-api-access-54zs5\") pod \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\" (UID: \"b2ecf8e0-1db5-44c8-84d2-321e753bf872\") " Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.859649 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-utilities" (OuterVolumeSpecName: 
"utilities") pod "b2ecf8e0-1db5-44c8-84d2-321e753bf872" (UID: "b2ecf8e0-1db5-44c8-84d2-321e753bf872"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.865866 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ecf8e0-1db5-44c8-84d2-321e753bf872-kube-api-access-54zs5" (OuterVolumeSpecName: "kube-api-access-54zs5") pod "b2ecf8e0-1db5-44c8-84d2-321e753bf872" (UID: "b2ecf8e0-1db5-44c8-84d2-321e753bf872"). InnerVolumeSpecName "kube-api-access-54zs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.916839 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.917932 4992 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d" exitCode=0 Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.917967 4992 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b" exitCode=0 Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.917975 4992 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b" exitCode=0 Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.917983 4992 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c" exitCode=2 Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.918028 4992 scope.go:117] 
"RemoveContainer" containerID="382a1b2c84891fe006c558db47f461262443b4bdff7c94a3b7caff1be761cfa2" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.960887 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54zs5\" (UniqueName: \"kubernetes.io/projected/b2ecf8e0-1db5-44c8-84d2-321e753bf872-kube-api-access-54zs5\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:21 crc kubenswrapper[4992]: I1211 08:27:21.961370 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.374844 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2696_ce3e165d-68f9-42f9-bfca-d08aa820f146/registry-server/0.log" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.375921 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.376714 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.377000 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.377307 4992 status_manager.go:851] "Failed to get status for pod" 
podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.468327 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-catalog-content\") pod \"ce3e165d-68f9-42f9-bfca-d08aa820f146\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.468432 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49xh2\" (UniqueName: \"kubernetes.io/projected/ce3e165d-68f9-42f9-bfca-d08aa820f146-kube-api-access-49xh2\") pod \"ce3e165d-68f9-42f9-bfca-d08aa820f146\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.468540 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-utilities\") pod \"ce3e165d-68f9-42f9-bfca-d08aa820f146\" (UID: \"ce3e165d-68f9-42f9-bfca-d08aa820f146\") " Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.471877 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-utilities" (OuterVolumeSpecName: "utilities") pod "ce3e165d-68f9-42f9-bfca-d08aa820f146" (UID: "ce3e165d-68f9-42f9-bfca-d08aa820f146"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.479354 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3e165d-68f9-42f9-bfca-d08aa820f146-kube-api-access-49xh2" (OuterVolumeSpecName: "kube-api-access-49xh2") pod "ce3e165d-68f9-42f9-bfca-d08aa820f146" (UID: "ce3e165d-68f9-42f9-bfca-d08aa820f146"). InnerVolumeSpecName "kube-api-access-49xh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.491991 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce3e165d-68f9-42f9-bfca-d08aa820f146" (UID: "ce3e165d-68f9-42f9-bfca-d08aa820f146"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.570583 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.570625 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49xh2\" (UniqueName: \"kubernetes.io/projected/ce3e165d-68f9-42f9-bfca-d08aa820f146-kube-api-access-49xh2\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.570657 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce3e165d-68f9-42f9-bfca-d08aa820f146-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.585076 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-mfdbv_fa659f5d-c90f-4aa0-aacb-79889eb26e8e/registry-server/0.log" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.586067 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.586783 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.587134 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.587594 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.587896 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.671310 4992 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-catalog-content\") pod \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.671443 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmqzj\" (UniqueName: \"kubernetes.io/projected/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-kube-api-access-rmqzj\") pod \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.671505 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-utilities\") pod \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\" (UID: \"fa659f5d-c90f-4aa0-aacb-79889eb26e8e\") " Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.672458 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-utilities" (OuterVolumeSpecName: "utilities") pod "fa659f5d-c90f-4aa0-aacb-79889eb26e8e" (UID: "fa659f5d-c90f-4aa0-aacb-79889eb26e8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.677686 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-kube-api-access-rmqzj" (OuterVolumeSpecName: "kube-api-access-rmqzj") pod "fa659f5d-c90f-4aa0-aacb-79889eb26e8e" (UID: "fa659f5d-c90f-4aa0-aacb-79889eb26e8e"). InnerVolumeSpecName "kube-api-access-rmqzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.718691 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa659f5d-c90f-4aa0-aacb-79889eb26e8e" (UID: "fa659f5d-c90f-4aa0-aacb-79889eb26e8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.773602 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.773679 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmqzj\" (UniqueName: \"kubernetes.io/projected/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-kube-api-access-rmqzj\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.773695 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa659f5d-c90f-4aa0-aacb-79889eb26e8e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.924868 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mfdbv_fa659f5d-c90f-4aa0-aacb-79889eb26e8e/registry-server/0.log" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.925880 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" exitCode=137 Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.925921 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfdbv" 
event={"ID":"fa659f5d-c90f-4aa0-aacb-79889eb26e8e","Type":"ContainerDied","Data":"51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.925952 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mfdbv" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.925962 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mfdbv" event={"ID":"fa659f5d-c90f-4aa0-aacb-79889eb26e8e","Type":"ContainerDied","Data":"c3d3f23c3b1449db279d48a8e22ff9ff93eaa11c875f75b6e9e2e8daaf07bb30"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.925989 4992 scope.go:117] "RemoveContainer" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.926857 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.927026 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.927292 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 
38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.927614 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.927813 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-c2696_ce3e165d-68f9-42f9-bfca-d08aa820f146/registry-server/0.log" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.928499 4992 generic.go:334] "Generic (PLEG): container finished" podID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerID="0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7" exitCode=137 Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.928556 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerDied","Data":"0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.928585 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2696" event={"ID":"ce3e165d-68f9-42f9-bfca-d08aa820f146","Type":"ContainerDied","Data":"2c97c9554c836e41b04a651fe30685120f1ecc12ae29efba5d78f88f8fa405aa"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.928775 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2696" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.932112 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.932593 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.932830 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.938972 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.941353 4992 scope.go:117] "RemoveContainer" containerID="6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.941368 4992 generic.go:334] "Generic (PLEG): container finished" 
podID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" containerID="828ae5dba594e4a0cf90b6383ba7e1011a79913658aa788a1a3b8a013c308d29" exitCode=0 Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.941523 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23ca8dad-ba79-4d2f-a891-f2fcf59922af","Type":"ContainerDied","Data":"828ae5dba594e4a0cf90b6383ba7e1011a79913658aa788a1a3b8a013c308d29"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.941962 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.942350 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.943022 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.943293 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 
38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.943604 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8d4dddd46e4215c580466c7e9cc7a91c53051518facf7a86d3ece0d1fb40c440"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.943924 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.944195 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.944705 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.945203 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: 
I1211 08:27:22.945391 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.945778 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.945970 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.946153 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.946340 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.946554 
4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.946557 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vtvvj" event={"ID":"b2ecf8e0-1db5-44c8-84d2-321e753bf872","Type":"ContainerDied","Data":"64eb67eba2c331a87d1e0de21468251cb4cb47fd8ff99f1c4abba07bf651ae03"} Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.946979 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vtvvj" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.947430 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.950131 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.951153 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.951398 4992 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d" exitCode=0 Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.951599 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.951970 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.952220 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.952674 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.953362 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.964704 4992 scope.go:117] "RemoveContainer" containerID="bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.968879 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.969328 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.969612 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.969915 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 
08:27:22.970177 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.970489 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.979235 4992 scope.go:117] "RemoveContainer" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" Dec 11 08:27:22 crc kubenswrapper[4992]: E1211 08:27:22.979610 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487\": container with ID starting with 51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487 not found: ID does not exist" containerID="51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.979682 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487"} err="failed to get container status \"51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487\": rpc error: code = NotFound desc = could not find container \"51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487\": container with ID starting with 51686c3d1cfc0c46e46fda95c029215060634844361d2d76cf5984818656a487 not found: ID does not exist" Dec 11 
08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.979708 4992 scope.go:117] "RemoveContainer" containerID="6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad" Dec 11 08:27:22 crc kubenswrapper[4992]: E1211 08:27:22.979934 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad\": container with ID starting with 6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad not found: ID does not exist" containerID="6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.979976 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad"} err="failed to get container status \"6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad\": rpc error: code = NotFound desc = could not find container \"6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad\": container with ID starting with 6b84b895538cacd9a169ed8917efb42200b27626cf4a12e26321bb098bd538ad not found: ID does not exist" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.980006 4992 scope.go:117] "RemoveContainer" containerID="bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937" Dec 11 08:27:22 crc kubenswrapper[4992]: E1211 08:27:22.980328 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937\": container with ID starting with bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937 not found: ID does not exist" containerID="bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.980352 4992 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937"} err="failed to get container status \"bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937\": rpc error: code = NotFound desc = could not find container \"bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937\": container with ID starting with bef898f8efcb8f35aa9498bfde8b947bfa85a3daf9687eb0a3f9246c6d750937 not found: ID does not exist" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.980383 4992 scope.go:117] "RemoveContainer" containerID="0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7" Dec 11 08:27:22 crc kubenswrapper[4992]: I1211 08:27:22.997868 4992 scope.go:117] "RemoveContainer" containerID="be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.010921 4992 scope.go:117] "RemoveContainer" containerID="1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.035366 4992 scope.go:117] "RemoveContainer" containerID="0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7" Dec 11 08:27:23 crc kubenswrapper[4992]: E1211 08:27:23.035802 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7\": container with ID starting with 0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7 not found: ID does not exist" containerID="0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.035848 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7"} err="failed to get container status \"0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7\": rpc error: 
code = NotFound desc = could not find container \"0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7\": container with ID starting with 0e697d2b71c10a58f662882a032fbf4f458b3ea744f3e967d3c047384a3093a7 not found: ID does not exist" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.035879 4992 scope.go:117] "RemoveContainer" containerID="be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3" Dec 11 08:27:23 crc kubenswrapper[4992]: E1211 08:27:23.036194 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3\": container with ID starting with be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3 not found: ID does not exist" containerID="be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.036247 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3"} err="failed to get container status \"be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3\": rpc error: code = NotFound desc = could not find container \"be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3\": container with ID starting with be139bd2ec0827b1923e095e91581660d26301375e641928d7bcd892bf5679b3 not found: ID does not exist" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.036285 4992 scope.go:117] "RemoveContainer" containerID="1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f" Dec 11 08:27:23 crc kubenswrapper[4992]: E1211 08:27:23.036763 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f\": container with ID starting with 
1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f not found: ID does not exist" containerID="1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.036872 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f"} err="failed to get container status \"1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f\": rpc error: code = NotFound desc = could not find container \"1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f\": container with ID starting with 1e1b91fc5bfd1b13ce5355a4485ad8330c17dabb957d91ef3d02f54d3cb9b27f not found: ID does not exist" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.036978 4992 scope.go:117] "RemoveContainer" containerID="7852ead0974cab58ad552ab030c1aaac5e8aca0e8e99447884232e476e25dc05" Dec 11 08:27:23 crc kubenswrapper[4992]: E1211 08:27:23.113393 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="3.2s" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.218497 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2ecf8e0-1db5-44c8-84d2-321e753bf872" (UID: "b2ecf8e0-1db5-44c8-84d2-321e753bf872"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.235136 4992 scope.go:117] "RemoveContainer" containerID="6017607a07ff29f287029a19debe1358b84656b2e745bb69a086330a79ff55a0" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.250412 4992 scope.go:117] "RemoveContainer" containerID="f9cef8b459a326ed06e002ee39169b8a0f1a29c6e1a9546e91c3502822fff0cb" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.263697 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.264090 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.264400 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.264763 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: 
connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.264992 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.265180 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.281332 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ecf8e0-1db5-44c8-84d2-321e753bf872-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.564043 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.565203 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.565911 4992 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.566606 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.567014 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.567269 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.567477 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 
38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.567703 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.568117 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.685737 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.685839 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.685914 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.685933 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.686008 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.686113 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.686297 4992 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.686315 4992 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.686324 4992 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.966489 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.968864 4992 scope.go:117] "RemoveContainer" containerID="88d4fe9922d7f1af1b599a9ed7223e38f193ac6665ee9bab7aa666247119550d" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.969131 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.985183 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.986310 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.986975 4992 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.987510 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.988796 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 
38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.989809 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:23 crc kubenswrapper[4992]: I1211 08:27:23.990323 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.246565 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 11 08:27:25 crc kubenswrapper[4992]: E1211 08:27:25.248742 4992 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.154s" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.257375 4992 scope.go:117] "RemoveContainer" containerID="2adf7dd083c2de3577b3ba93c9d3ef0013c23fe5c6eb017a2ee28f91a144fc7b" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.317792 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.318675 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.319192 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.319470 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.319724 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.319942 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.320252 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.320294 4992 scope.go:117] "RemoveContainer" containerID="4d3b136cea27dcf1139b90e1efbe158db92a642d5d737398aa2d379ca827cf0b" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.336180 4992 scope.go:117] "RemoveContainer" containerID="ca57f6881aa815c0df66336936f00758b0dfd48ad2c939c1192a54f4790c6d8c" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.352990 4992 scope.go:117] "RemoveContainer" containerID="72eeacb535741fe69b542d35190b2565e368a63e6888383c6f7791f6d9d7a93d" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.370169 4992 scope.go:117] "RemoveContainer" containerID="dc123fd59b6cf98af3f44da6b512ce389ad91510a044d1991ff2893f9d291230" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.408565 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-var-lock\") pod \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.408737 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kubelet-dir\") pod \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " Dec 11 08:27:25 
crc kubenswrapper[4992]: I1211 08:27:25.408868 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kube-api-access\") pod \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\" (UID: \"23ca8dad-ba79-4d2f-a891-f2fcf59922af\") " Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.408851 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-var-lock" (OuterVolumeSpecName: "var-lock") pod "23ca8dad-ba79-4d2f-a891-f2fcf59922af" (UID: "23ca8dad-ba79-4d2f-a891-f2fcf59922af"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.408952 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "23ca8dad-ba79-4d2f-a891-f2fcf59922af" (UID: "23ca8dad-ba79-4d2f-a891-f2fcf59922af"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.409250 4992 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.409281 4992 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.414575 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "23ca8dad-ba79-4d2f-a891-f2fcf59922af" (UID: "23ca8dad-ba79-4d2f-a891-f2fcf59922af"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.510523 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23ca8dad-ba79-4d2f-a891-f2fcf59922af-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.991921 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"23ca8dad-ba79-4d2f-a891-f2fcf59922af","Type":"ContainerDied","Data":"2f215f478a1df5cf1077a430c4514fc6dd5466117c0b3b592318df55f54d81fd"} Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.991974 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f215f478a1df5cf1077a430c4514fc6dd5466117c0b3b592318df55f54d81fd" Dec 11 08:27:25 crc kubenswrapper[4992]: I1211 08:27:25.991990 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.005091 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.005612 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.005930 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.006166 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.006417 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.006621 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.098276 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.098846 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.099057 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.099275 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.099517 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: I1211 08:27:26.099870 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:26 crc kubenswrapper[4992]: E1211 08:27:26.315052 4992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.105:6443: connect: connection refused" interval="6.4s" Dec 11 08:27:27 crc kubenswrapper[4992]: E1211 08:27:27.602502 4992 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.105:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18801bd1b47b4758 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created 
container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 08:27:21.41932732 +0000 UTC m=+265.678801246,LastTimestamp:2025-12-11 08:27:21.41932732 +0000 UTC m=+265.678801246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.095216 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.098028 4992 status_manager.go:851] "Failed to get status for pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.098626 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.100201 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.100602 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.101208 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.101470 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.112426 4992 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.112483 4992 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:28 crc kubenswrapper[4992]: E1211 08:27:28.113331 4992 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:28 crc kubenswrapper[4992]: I1211 08:27:28.114235 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:28 crc kubenswrapper[4992]: W1211 08:27:28.140490 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-5707c4d402a8993f44d48960f8d786daee7efac20be39abfe218edbae0197153 WatchSource:0}: Error finding container 5707c4d402a8993f44d48960f8d786daee7efac20be39abfe218edbae0197153: Status 404 returned error can't find the container with id 5707c4d402a8993f44d48960f8d786daee7efac20be39abfe218edbae0197153 Dec 11 08:27:28 crc kubenswrapper[4992]: E1211 08:27:28.694141 4992 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-conmon-2004cd15150f8a39c8927a117f6f3d687c0efb8362aa2f50db47d9582a819837.scope\": RecentStats: unable to find data in memory cache]" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.010437 4992 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="2004cd15150f8a39c8927a117f6f3d687c0efb8362aa2f50db47d9582a819837" exitCode=0 Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.010489 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"2004cd15150f8a39c8927a117f6f3d687c0efb8362aa2f50db47d9582a819837"} Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.010519 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5707c4d402a8993f44d48960f8d786daee7efac20be39abfe218edbae0197153"} Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.010809 4992 kubelet.go:1909] "Trying 
to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.010823 4992 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:29 crc kubenswrapper[4992]: E1211 08:27:29.011269 4992 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.011407 4992 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.011791 4992 status_manager.go:851] "Failed to get status for pod" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" pod="openshift-marketplace/redhat-marketplace-c2696" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2696\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.012202 4992 status_manager.go:851] "Failed to get status for pod" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" pod="openshift-marketplace/certified-operators-mfdbv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-mfdbv\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.012730 4992 status_manager.go:851] "Failed to get status for 
pod" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.013001 4992 status_manager.go:851] "Failed to get status for pod" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" pod="openshift-marketplace/community-operators-dkhhd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dkhhd\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:29 crc kubenswrapper[4992]: I1211 08:27:29.013279 4992 status_manager.go:851] "Failed to get status for pod" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" pod="openshift-marketplace/community-operators-vtvvj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vtvvj\": dial tcp 38.129.56.105:6443: connect: connection refused" Dec 11 08:27:30 crc kubenswrapper[4992]: I1211 08:27:30.017553 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"06969a4b58dea31ce59a963dcf98c6a927bf02527665f0815bf2675a7aed7d0e"} Dec 11 08:27:30 crc kubenswrapper[4992]: I1211 08:27:30.018081 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ab91b98955ae76d68768ce006476b62e97d551e7edb52f01213f36db93f33686"} Dec 11 08:27:31 crc kubenswrapper[4992]: I1211 08:27:31.030146 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d3f1076d6cde616b088fac9d4d493357d69b4cd3209b6bd20aef59e33db03381"} Dec 11 08:27:31 crc kubenswrapper[4992]: I1211 08:27:31.030204 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"07b6c59bd245958a52d0a910812a59ce082418ce80b2af4c3d89df3e68fc191d"} Dec 11 08:27:31 crc kubenswrapper[4992]: I1211 08:27:31.030222 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4fed7d6bea728d3fe0aeb309b660a68bf42afd9aec23445d08aa0228787b5968"} Dec 11 08:27:31 crc kubenswrapper[4992]: I1211 08:27:31.030388 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:31 crc kubenswrapper[4992]: I1211 08:27:31.030508 4992 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:31 crc kubenswrapper[4992]: I1211 08:27:31.030526 4992 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:33 crc kubenswrapper[4992]: I1211 08:27:33.115298 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:33 crc kubenswrapper[4992]: I1211 08:27:33.115872 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:33 crc kubenswrapper[4992]: I1211 08:27:33.121253 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:36 crc kubenswrapper[4992]: I1211 08:27:36.060191 
4992 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:36 crc kubenswrapper[4992]: I1211 08:27:36.230586 4992 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="127c4f4e-8ba1-48ef-987d-942f333d9399" Dec 11 08:27:36 crc kubenswrapper[4992]: I1211 08:27:36.922251 4992 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 08:27:36 crc kubenswrapper[4992]: I1211 08:27:36.922326 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.070151 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.070226 4992 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67" exitCode=1 Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.070386 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67"} Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.070580 4992 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.070608 4992 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.071449 4992 scope.go:117] "RemoveContainer" containerID="f8a5b87a177c983f593726a10cb18d20b7ca0ffefca3c296afdef8e0abca4e67" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.075304 4992 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="127c4f4e-8ba1-48ef-987d-942f333d9399" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.077979 4992 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://ab91b98955ae76d68768ce006476b62e97d551e7edb52f01213f36db93f33686" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.078069 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 08:27:37 crc kubenswrapper[4992]: I1211 08:27:37.352981 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 08:27:38 crc kubenswrapper[4992]: I1211 08:27:38.081710 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 08:27:38 crc kubenswrapper[4992]: 
I1211 08:27:38.081877 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6e0c373b5ee6b0e0a95e2b3c19e6df1f1de722d3ef3a64a3d0501e2460d43e94"}
Dec 11 08:27:38 crc kubenswrapper[4992]: I1211 08:27:38.082155 4992 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d"
Dec 11 08:27:38 crc kubenswrapper[4992]: I1211 08:27:38.082176 4992 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2f965562-76bd-4e1c-bc05-7483ad9e773d"
Dec 11 08:27:38 crc kubenswrapper[4992]: I1211 08:27:38.110234 4992 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="127c4f4e-8ba1-48ef-987d-942f333d9399"
Dec 11 08:27:42 crc kubenswrapper[4992]: I1211 08:27:42.449230 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 11 08:27:42 crc kubenswrapper[4992]: I1211 08:27:42.518480 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 11 08:27:42 crc kubenswrapper[4992]: I1211 08:27:42.584342 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 11 08:27:42 crc kubenswrapper[4992]: I1211 08:27:42.773276 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 11 08:27:42 crc kubenswrapper[4992]: I1211 08:27:42.800976 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 11 08:27:42 crc kubenswrapper[4992]: I1211 08:27:42.929433 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.027247 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.136876 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.621752 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.642408 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.689482 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.810511 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.843374 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 11 08:27:43 crc kubenswrapper[4992]: I1211 08:27:43.926604 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.021356 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.235113 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.334459 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.366287 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.648176 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.685710 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.861898 4992 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 11 08:27:44 crc kubenswrapper[4992]: I1211 08:27:44.885368 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.025616 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.027794 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.069593 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.131465 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.391990 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.409999 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.410691 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.507528 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.525323 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.549413 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.559170 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.571929 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.613358 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.687407 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.698966 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.760605 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.777238 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.833066 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.945842 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 11 08:27:45 crc kubenswrapper[4992]: I1211 08:27:45.993213 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.046429 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.107823 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.121534 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.273798 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.348310 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.354168 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.428640 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.451080 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.532432 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.714508 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.758228 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.798504 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.890950 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.894496 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.921457 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 08:27:46 crc kubenswrapper[4992]: I1211 08:27:46.953736 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.205070 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.353119 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.356720 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.447526 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.475449 4992 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.534108 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.566625 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.736058 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.936383 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.994445 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 11 08:27:47 crc kubenswrapper[4992]: I1211 08:27:47.997477 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.133409 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.168935 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.290194 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.312860 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.378032 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.448912 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.485428 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.553048 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.578378 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 11 08:27:48 crc kubenswrapper[4992]: I1211 08:27:48.662754 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 11 08:27:49 crc kubenswrapper[4992]: I1211 08:27:49.065043 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 11 08:27:49 crc kubenswrapper[4992]: I1211 08:27:49.331343 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 11 08:27:49 crc kubenswrapper[4992]: I1211 08:27:49.472262 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 11 08:27:49 crc kubenswrapper[4992]: I1211 08:27:49.687134 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 11 08:27:49 crc kubenswrapper[4992]: I1211 08:27:49.795503 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 11 08:27:50 crc kubenswrapper[4992]: I1211 08:27:50.225517 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 11 08:27:50 crc kubenswrapper[4992]: I1211 08:27:50.339487 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 11 08:27:50 crc kubenswrapper[4992]: I1211 08:27:50.440681 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 11 08:27:50 crc kubenswrapper[4992]: I1211 08:27:50.848254 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.168242 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.303565 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.390627 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.411618 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.418704 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.648945 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 11 08:27:51 crc kubenswrapper[4992]: I1211 08:27:51.794321 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.027660 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.078611 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.156605 4992 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.217014 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.237835 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.264919 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.594594 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.609911 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.613074 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.727506 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 11 08:27:52 crc kubenswrapper[4992]: I1211 08:27:52.887201 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.102723 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.256067 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.268465 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.272108 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.278216 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.296045 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.343011 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.400848 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.491087 4992 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.495724 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.526699 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.621787 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.641862 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.670072 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.699125 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.708879 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.732157 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.797446 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.820594 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.840860 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 11 08:27:53 crc kubenswrapper[4992]: I1211 08:27:53.885727 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.010065 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.091466 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.110351 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.257957 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.316656 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.348622 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.456468 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.508445 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.537300 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.680127 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.707605 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.732242 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.759691 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.760959 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 11 08:27:54 crc kubenswrapper[4992]: I1211 08:27:54.875175 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.193264 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.321303 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.388127 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.525000 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.526508 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.580489 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.658071 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.664566 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.813682 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.823577 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.919852 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 11 08:27:55 crc kubenswrapper[4992]: I1211 08:27:55.975347 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.103160 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.134783 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.136316 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.136909 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.148138 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.154103 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.179134 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.320102 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.538988 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.541302 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.666424 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.705613 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.842372 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.863152 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 11 08:27:56 crc kubenswrapper[4992]: I1211 08:27:56.872124 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.187239 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.196215 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.412523 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.549263 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.591261 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.601738 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.637613 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.646575 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.724756 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.785338 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.810366 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 11 08:27:57 crc kubenswrapper[4992]: I1211 08:27:57.864401 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.023376 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.059821 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.067069 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.152884 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.275992 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.435627 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.483833 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.507078 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 11 08:27:58 crc kubenswrapper[4992]: I1211 08:27:58.989490 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.048476 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.051142 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.071669 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.127097 4992 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.133619 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.143701 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.155032 4992 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.161389 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.161360608 podStartE2EDuration="44.161360608s" podCreationTimestamp="2025-12-11 08:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:27:36.131838788 +0000 UTC m=+280.391312714" watchObservedRunningTime="2025-12-11 08:27:59.161360608 +0000 UTC m=+303.420834564"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.163575 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vtvvj","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/redhat-marketplace-c2696","openshift-marketplace/certified-operators-mfdbv"]
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.163839 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.168344 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.185440 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.185419372 podStartE2EDuration="23.185419372s" podCreationTimestamp="2025-12-11 08:27:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:27:59.18236671 +0000 UTC m=+303.441840676" watchObservedRunningTime="2025-12-11 08:27:59.185419372 +0000 UTC m=+303.444893288"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.265104 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.304971 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.320566 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.353852 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.368668 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.533312 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.611196 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.710340 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.755674 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.903196 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.914398 4992 reflector.go:368] Caches populated for *v1.ConfigMap
from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 08:27:59 crc kubenswrapper[4992]: I1211 08:27:59.941497 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.021501 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.058413 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.102095 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" path="/var/lib/kubelet/pods/b2ecf8e0-1db5-44c8-84d2-321e753bf872/volumes" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.103245 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" path="/var/lib/kubelet/pods/ce3e165d-68f9-42f9-bfca-d08aa820f146/volumes" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.104383 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" path="/var/lib/kubelet/pods/fa659f5d-c90f-4aa0-aacb-79889eb26e8e/volumes" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.194517 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.200696 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.452896 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 08:28:00 crc kubenswrapper[4992]: 
I1211 08:28:00.526234 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.537244 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.616154 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.675858 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.729694 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.777447 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 08:28:00 crc kubenswrapper[4992]: I1211 08:28:00.934790 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.030035 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.056921 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.099812 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.104999 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 08:28:01 crc 
kubenswrapper[4992]: I1211 08:28:01.193264 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.225840 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.239140 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.348232 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.348277 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.483291 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.712657 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.742887 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.772897 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.878709 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 08:28:01 crc kubenswrapper[4992]: I1211 08:28:01.980284 4992 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 08:28:02 crc kubenswrapper[4992]: I1211 08:28:02.153613 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 08:28:02 crc kubenswrapper[4992]: I1211 08:28:02.316091 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 08:28:02 crc kubenswrapper[4992]: I1211 08:28:02.479251 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 08:28:02 crc kubenswrapper[4992]: I1211 08:28:02.561006 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 08:28:02 crc kubenswrapper[4992]: I1211 08:28:02.903293 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 08:28:02 crc kubenswrapper[4992]: I1211 08:28:02.925906 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.017325 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.210321 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.229480 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.314442 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.408826 4992 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.410871 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.425500 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.498203 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.522959 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.558003 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 08:28:03 crc kubenswrapper[4992]: I1211 08:28:03.988889 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 08:28:04 crc kubenswrapper[4992]: I1211 08:28:04.118366 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 08:28:04 crc kubenswrapper[4992]: I1211 08:28:04.282010 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 08:28:05 crc kubenswrapper[4992]: I1211 08:28:05.983847 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 08:28:08 crc kubenswrapper[4992]: I1211 08:28:08.872564 4992 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 
08:28:08 crc kubenswrapper[4992]: I1211 08:28:08.873214 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://8d4dddd46e4215c580466c7e9cc7a91c53051518facf7a86d3ece0d1fb40c440" gracePeriod=5 Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.964684 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbc6l"] Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.968402 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zbc6l" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="registry-server" containerID="cri-o://b763351b29fdfe47fd7d86c62800ff54570121abb6f54130d6d5f81628014120" gracePeriod=30 Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.990279 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkhhd"] Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.990597 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dkhhd" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="registry-server" containerID="cri-o://15eb2ac41312a2a00fce57fe2c5307be5c72ea7a59a646cd9a946d4d56729e96" gracePeriod=30 Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.994916 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4zftz"] Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.995137 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" 
containerID="cri-o://ebcfcf0491d4f1e3b26815bb950a9a6f7db4feedd645f1ebac40a484c7a007ba" gracePeriod=30 Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.999024 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2s2wb"] Dec 11 08:28:11 crc kubenswrapper[4992]: I1211 08:28:11.999382 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2s2wb" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="registry-server" containerID="cri-o://a2eecc3a0bfd8d74dedd5b5d5d470f723743e667547a042e693ed92086b24657" gracePeriod=30 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.005438 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccvgp"] Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.005816 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccvgp" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="registry-server" containerID="cri-o://f966a07bc016a892f1d2ec08435d72f49df6fed820e5706808c556d16948c1be" gracePeriod=30 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.021713 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkzhk"] Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.022066 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkzhk" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="registry-server" containerID="cri-o://2868de06e555506899add97378530e04066fbfcf61e5f8a04e6b915264ebd31b" gracePeriod=30 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.025842 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flz8s"] Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026154 4992 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="extract-utilities" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026166 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="extract-utilities" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026177 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="extract-utilities" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026183 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="extract-utilities" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026210 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026218 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026227 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="extract-content" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026232 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="extract-content" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026241 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="extract-content" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026247 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="extract-content" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026263 4992 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" containerName="installer" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026286 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" containerName="installer" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026299 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026305 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026314 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026319 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026328 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="extract-content" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026333 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="extract-content" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026343 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="extract-utilities" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026368 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="extract-utilities" Dec 11 08:28:12 crc kubenswrapper[4992]: E1211 08:28:12.026375 4992 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026380 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026531 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa659f5d-c90f-4aa0-aacb-79889eb26e8e" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026544 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ecf8e0-1db5-44c8-84d2-321e753bf872" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026551 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3e165d-68f9-42f9-bfca-d08aa820f146" containerName="registry-server" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026558 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ca8dad-ba79-4d2f-a891-f2fcf59922af" containerName="installer" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.026567 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.032079 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.045766 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3229bda-a4f4-42ac-8936-829ab828fce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.045887 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqhs\" (UniqueName: \"kubernetes.io/projected/f3229bda-a4f4-42ac-8936-829ab828fce4-kube-api-access-xwqhs\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.045956 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3229bda-a4f4-42ac-8936-829ab828fce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.051934 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flz8s"] Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.148393 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwqhs\" (UniqueName: \"kubernetes.io/projected/f3229bda-a4f4-42ac-8936-829ab828fce4-kube-api-access-xwqhs\") pod \"marketplace-operator-79b997595-flz8s\" (UID: 
\"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.148458 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3229bda-a4f4-42ac-8936-829ab828fce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.148563 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3229bda-a4f4-42ac-8936-829ab828fce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.150084 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3229bda-a4f4-42ac-8936-829ab828fce4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.156476 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3229bda-a4f4-42ac-8936-829ab828fce4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.172049 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xwqhs\" (UniqueName: \"kubernetes.io/projected/f3229bda-a4f4-42ac-8936-829ab828fce4-kube-api-access-xwqhs\") pod \"marketplace-operator-79b997595-flz8s\" (UID: \"f3229bda-a4f4-42ac-8936-829ab828fce4\") " pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.293488 4992 generic.go:334] "Generic (PLEG): container finished" podID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerID="f966a07bc016a892f1d2ec08435d72f49df6fed820e5706808c556d16948c1be" exitCode=0 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.293559 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerDied","Data":"f966a07bc016a892f1d2ec08435d72f49df6fed820e5706808c556d16948c1be"} Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.295702 4992 generic.go:334] "Generic (PLEG): container finished" podID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerID="a2eecc3a0bfd8d74dedd5b5d5d470f723743e667547a042e693ed92086b24657" exitCode=0 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.295753 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerDied","Data":"a2eecc3a0bfd8d74dedd5b5d5d470f723743e667547a042e693ed92086b24657"} Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.297690 4992 generic.go:334] "Generic (PLEG): container finished" podID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerID="ebcfcf0491d4f1e3b26815bb950a9a6f7db4feedd645f1ebac40a484c7a007ba" exitCode=0 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.297766 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" 
event={"ID":"07389d03-2315-4483-b6bc-c25d2fb69f53","Type":"ContainerDied","Data":"ebcfcf0491d4f1e3b26815bb950a9a6f7db4feedd645f1ebac40a484c7a007ba"} Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.299514 4992 generic.go:334] "Generic (PLEG): container finished" podID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerID="2868de06e555506899add97378530e04066fbfcf61e5f8a04e6b915264ebd31b" exitCode=0 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.299550 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerDied","Data":"2868de06e555506899add97378530e04066fbfcf61e5f8a04e6b915264ebd31b"} Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.302370 4992 generic.go:334] "Generic (PLEG): container finished" podID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerID="15eb2ac41312a2a00fce57fe2c5307be5c72ea7a59a646cd9a946d4d56729e96" exitCode=0 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.302540 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerDied","Data":"15eb2ac41312a2a00fce57fe2c5307be5c72ea7a59a646cd9a946d4d56729e96"} Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.303979 4992 generic.go:334] "Generic (PLEG): container finished" podID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerID="b763351b29fdfe47fd7d86c62800ff54570121abb6f54130d6d5f81628014120" exitCode=0 Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.304010 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerDied","Data":"b763351b29fdfe47fd7d86c62800ff54570121abb6f54130d6d5f81628014120"} Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.462440 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.466045 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.472196 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.481035 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.493087 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.493599 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.499949 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552083 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6gqn\" (UniqueName: \"kubernetes.io/projected/4de0775a-dd54-436c-a5ff-fd6782a559a8-kube-api-access-h6gqn\") pod \"4de0775a-dd54-436c-a5ff-fd6782a559a8\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552136 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-utilities\") pod \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552157 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-utilities\") pod \"4a5362af-66f3-4482-8f2c-2f5748283eac\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552181 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-trusted-ca\") pod \"07389d03-2315-4483-b6bc-c25d2fb69f53\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552204 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dljmd\" (UniqueName: \"kubernetes.io/projected/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-kube-api-access-dljmd\") pod \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552235 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-catalog-content\") pod \"4de0775a-dd54-436c-a5ff-fd6782a559a8\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552260 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wspxx\" (UniqueName: \"kubernetes.io/projected/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-kube-api-access-wspxx\") pod \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552278 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-catalog-content\") pod \"4a5362af-66f3-4482-8f2c-2f5748283eac\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552294 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-catalog-content\") pod \"e27af7be-51b7-40ad-a740-9f9cc14fa328\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552312 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm6tn\" (UniqueName: \"kubernetes.io/projected/e27af7be-51b7-40ad-a740-9f9cc14fa328-kube-api-access-pm6tn\") pod \"e27af7be-51b7-40ad-a740-9f9cc14fa328\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552328 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tz2h\" (UniqueName: \"kubernetes.io/projected/07389d03-2315-4483-b6bc-c25d2fb69f53-kube-api-access-9tz2h\") pod \"07389d03-2315-4483-b6bc-c25d2fb69f53\" 
(UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552348 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-operator-metrics\") pod \"07389d03-2315-4483-b6bc-c25d2fb69f53\" (UID: \"07389d03-2315-4483-b6bc-c25d2fb69f53\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552363 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-catalog-content\") pod \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\" (UID: \"5a7d128f-32e9-47c5-bac4-6e94898ea0b7\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552384 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-utilities\") pod \"e27af7be-51b7-40ad-a740-9f9cc14fa328\" (UID: \"e27af7be-51b7-40ad-a740-9f9cc14fa328\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552402 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-utilities\") pod \"4de0775a-dd54-436c-a5ff-fd6782a559a8\" (UID: \"4de0775a-dd54-436c-a5ff-fd6782a559a8\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552421 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvg6\" (UniqueName: \"kubernetes.io/projected/4a5362af-66f3-4482-8f2c-2f5748283eac-kube-api-access-5fvg6\") pod \"4a5362af-66f3-4482-8f2c-2f5748283eac\" (UID: \"4a5362af-66f3-4482-8f2c-2f5748283eac\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552437 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-utilities\") pod \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.552458 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-catalog-content\") pod \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\" (UID: \"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37\") " Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.555545 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-utilities" (OuterVolumeSpecName: "utilities") pod "f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" (UID: "f06eb87f-2805-4e34-bbb7-86d5ee8d9f37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.555710 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-utilities" (OuterVolumeSpecName: "utilities") pod "4a5362af-66f3-4482-8f2c-2f5748283eac" (UID: "4a5362af-66f3-4482-8f2c-2f5748283eac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.556124 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-utilities" (OuterVolumeSpecName: "utilities") pod "4de0775a-dd54-436c-a5ff-fd6782a559a8" (UID: "4de0775a-dd54-436c-a5ff-fd6782a559a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.556135 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-kube-api-access-wspxx" (OuterVolumeSpecName: "kube-api-access-wspxx") pod "f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" (UID: "f06eb87f-2805-4e34-bbb7-86d5ee8d9f37"). InnerVolumeSpecName "kube-api-access-wspxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.556380 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-utilities" (OuterVolumeSpecName: "utilities") pod "5a7d128f-32e9-47c5-bac4-6e94898ea0b7" (UID: "5a7d128f-32e9-47c5-bac4-6e94898ea0b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.558676 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "07389d03-2315-4483-b6bc-c25d2fb69f53" (UID: "07389d03-2315-4483-b6bc-c25d2fb69f53"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.559024 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27af7be-51b7-40ad-a740-9f9cc14fa328-kube-api-access-pm6tn" (OuterVolumeSpecName: "kube-api-access-pm6tn") pod "e27af7be-51b7-40ad-a740-9f9cc14fa328" (UID: "e27af7be-51b7-40ad-a740-9f9cc14fa328"). InnerVolumeSpecName "kube-api-access-pm6tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.560310 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07389d03-2315-4483-b6bc-c25d2fb69f53-kube-api-access-9tz2h" (OuterVolumeSpecName: "kube-api-access-9tz2h") pod "07389d03-2315-4483-b6bc-c25d2fb69f53" (UID: "07389d03-2315-4483-b6bc-c25d2fb69f53"). InnerVolumeSpecName "kube-api-access-9tz2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.561309 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "07389d03-2315-4483-b6bc-c25d2fb69f53" (UID: "07389d03-2315-4483-b6bc-c25d2fb69f53"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.562030 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a5362af-66f3-4482-8f2c-2f5748283eac-kube-api-access-5fvg6" (OuterVolumeSpecName: "kube-api-access-5fvg6") pod "4a5362af-66f3-4482-8f2c-2f5748283eac" (UID: "4a5362af-66f3-4482-8f2c-2f5748283eac"). InnerVolumeSpecName "kube-api-access-5fvg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.565006 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-utilities" (OuterVolumeSpecName: "utilities") pod "e27af7be-51b7-40ad-a740-9f9cc14fa328" (UID: "e27af7be-51b7-40ad-a740-9f9cc14fa328"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.575906 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-kube-api-access-dljmd" (OuterVolumeSpecName: "kube-api-access-dljmd") pod "5a7d128f-32e9-47c5-bac4-6e94898ea0b7" (UID: "5a7d128f-32e9-47c5-bac4-6e94898ea0b7"). InnerVolumeSpecName "kube-api-access-dljmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.584238 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de0775a-dd54-436c-a5ff-fd6782a559a8-kube-api-access-h6gqn" (OuterVolumeSpecName: "kube-api-access-h6gqn") pod "4de0775a-dd54-436c-a5ff-fd6782a559a8" (UID: "4de0775a-dd54-436c-a5ff-fd6782a559a8"). InnerVolumeSpecName "kube-api-access-h6gqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.609169 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" (UID: "f06eb87f-2805-4e34-bbb7-86d5ee8d9f37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.649648 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4a5362af-66f3-4482-8f2c-2f5748283eac" (UID: "4a5362af-66f3-4482-8f2c-2f5748283eac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.650242 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4de0775a-dd54-436c-a5ff-fd6782a559a8" (UID: "4de0775a-dd54-436c-a5ff-fd6782a559a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653655 4992 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653687 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653699 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653709 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fvg6\" (UniqueName: \"kubernetes.io/projected/4a5362af-66f3-4482-8f2c-2f5748283eac-kube-api-access-5fvg6\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653718 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653727 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653735 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6gqn\" (UniqueName: \"kubernetes.io/projected/4de0775a-dd54-436c-a5ff-fd6782a559a8-kube-api-access-h6gqn\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653743 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653751 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653759 4992 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/07389d03-2315-4483-b6bc-c25d2fb69f53-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653767 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljmd\" (UniqueName: \"kubernetes.io/projected/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-kube-api-access-dljmd\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653776 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de0775a-dd54-436c-a5ff-fd6782a559a8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653784 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a5362af-66f3-4482-8f2c-2f5748283eac-catalog-content\") 
on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653796 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wspxx\" (UniqueName: \"kubernetes.io/projected/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37-kube-api-access-wspxx\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653805 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm6tn\" (UniqueName: \"kubernetes.io/projected/e27af7be-51b7-40ad-a740-9f9cc14fa328-kube-api-access-pm6tn\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.653814 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tz2h\" (UniqueName: \"kubernetes.io/projected/07389d03-2315-4483-b6bc-c25d2fb69f53-kube-api-access-9tz2h\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.683102 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-flz8s"] Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.712097 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e27af7be-51b7-40ad-a740-9f9cc14fa328" (UID: "e27af7be-51b7-40ad-a740-9f9cc14fa328"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.721728 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a7d128f-32e9-47c5-bac4-6e94898ea0b7" (UID: "5a7d128f-32e9-47c5-bac4-6e94898ea0b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.754696 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af7be-51b7-40ad-a740-9f9cc14fa328-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:12 crc kubenswrapper[4992]: I1211 08:28:12.754990 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a7d128f-32e9-47c5-bac4-6e94898ea0b7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.311993 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zbc6l" event={"ID":"4a5362af-66f3-4482-8f2c-2f5748283eac","Type":"ContainerDied","Data":"d94f7f1c5d133cc49e22a31d2540d43c6dfef053f5429f745536d1ba594f619b"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.312353 4992 scope.go:117] "RemoveContainer" containerID="b763351b29fdfe47fd7d86c62800ff54570121abb6f54130d6d5f81628014120" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.312057 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zbc6l" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.314449 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccvgp" event={"ID":"5a7d128f-32e9-47c5-bac4-6e94898ea0b7","Type":"ContainerDied","Data":"08145902ca054443f7200a4fdac182e23ac0e254f24dc83d5b70e76188b60264"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.314534 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccvgp" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.319203 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2s2wb" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.319197 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2s2wb" event={"ID":"f06eb87f-2805-4e34-bbb7-86d5ee8d9f37","Type":"ContainerDied","Data":"9b85b7070868dfe8ad8755f1bb93e36aaa3490b51d63d3a1ce6a23c13f32bded"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.321249 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.321243 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4zftz" event={"ID":"07389d03-2315-4483-b6bc-c25d2fb69f53","Type":"ContainerDied","Data":"5a6f6aae5d4864bb54be8c5ede11455a0bd90f4de2f6e7fa05912d771a0d815b"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.331888 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzhk" event={"ID":"e27af7be-51b7-40ad-a740-9f9cc14fa328","Type":"ContainerDied","Data":"344ad35e191cc595eecded2b98448bc8a639f48302c956e9bbd02f9df7868249"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.331920 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzhk" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.333293 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" event={"ID":"f3229bda-a4f4-42ac-8936-829ab828fce4","Type":"ContainerStarted","Data":"13eebdcd8464b90914757080e56da0e5d8a73a36fcf962febe26f833adccbc3f"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.333339 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" event={"ID":"f3229bda-a4f4-42ac-8936-829ab828fce4","Type":"ContainerStarted","Data":"a09b5570b179adb0b922e747ff525d37cbc8856450bfef84f32fcb7e4641f7b0"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.333497 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.336904 4992 scope.go:117] "RemoveContainer" containerID="e18f667059d8177093e6117b585ff955ca44428daa6b73b693d26f0c6b738fe8" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.338948 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dkhhd" event={"ID":"4de0775a-dd54-436c-a5ff-fd6782a559a8","Type":"ContainerDied","Data":"9d66f06cd460c49b4e319e9589fb29da3fcdc30127908509c403187b681584dd"} Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.339163 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dkhhd" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.340714 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.353688 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-flz8s" podStartSLOduration=2.353670642 podStartE2EDuration="2.353670642s" podCreationTimestamp="2025-12-11 08:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:28:13.352791219 +0000 UTC m=+317.612265145" watchObservedRunningTime="2025-12-11 08:28:13.353670642 +0000 UTC m=+317.613144568" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.367120 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2s2wb"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.368902 4992 scope.go:117] "RemoveContainer" containerID="6eab330ad7b6f9cc0b95c8788d6bd1c8a5fecddd72b4a103a615df81b666c925" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.376194 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2s2wb"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.392716 4992 scope.go:117] "RemoveContainer" containerID="f966a07bc016a892f1d2ec08435d72f49df6fed820e5706808c556d16948c1be" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.416575 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4zftz"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.424883 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4zftz"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 
08:28:13.431015 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccvgp"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.431151 4992 scope.go:117] "RemoveContainer" containerID="55761ba24cca98101fadd1f6ac968b4ead25fbe5d0e7aaeedbfadb39caa0d185" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.461843 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccvgp"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.464126 4992 scope.go:117] "RemoveContainer" containerID="b71bf3a90e32be049913e290acb60082759f3dc8a1fbb10651c6f42ce0acd8c2" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.467328 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zbc6l"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.471001 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zbc6l"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.473785 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkzhk"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.476514 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lkzhk"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.477917 4992 scope.go:117] "RemoveContainer" containerID="a2eecc3a0bfd8d74dedd5b5d5d470f723743e667547a042e693ed92086b24657" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.479134 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dkhhd"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.482157 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dkhhd"] Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.493732 4992 scope.go:117] "RemoveContainer" 
containerID="e7f68d21e0179417c004412ea063275b2ae373071b752c64ace5a68b50cc6c33" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.509243 4992 scope.go:117] "RemoveContainer" containerID="94628c3f9c191e37ec65ae569da9e77cca8d705e9064355ad325dd682389c832" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.521796 4992 scope.go:117] "RemoveContainer" containerID="ebcfcf0491d4f1e3b26815bb950a9a6f7db4feedd645f1ebac40a484c7a007ba" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.534661 4992 scope.go:117] "RemoveContainer" containerID="2868de06e555506899add97378530e04066fbfcf61e5f8a04e6b915264ebd31b" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.548212 4992 scope.go:117] "RemoveContainer" containerID="0d181d2047116fe42a5f65cc89e4a27924c84910acae42168cda3683d67c7b59" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.580841 4992 scope.go:117] "RemoveContainer" containerID="f912d88e95b0b120b7d9d2f3b8873249a2e82c3d89cbd08f007f517b4ee12734" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.596865 4992 scope.go:117] "RemoveContainer" containerID="15eb2ac41312a2a00fce57fe2c5307be5c72ea7a59a646cd9a946d4d56729e96" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.610674 4992 scope.go:117] "RemoveContainer" containerID="e9985d19e210687d730f300e3c7c951350082bd577e460166618dec4a716cd7a" Dec 11 08:28:13 crc kubenswrapper[4992]: I1211 08:28:13.625806 4992 scope.go:117] "RemoveContainer" containerID="b7d811624a5a8e3d8c044bcd5c2a5a38e5480b14337179540e9de4de113c3525" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.109127 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" path="/var/lib/kubelet/pods/07389d03-2315-4483-b6bc-c25d2fb69f53/volumes" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.110289 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" 
path="/var/lib/kubelet/pods/4a5362af-66f3-4482-8f2c-2f5748283eac/volumes" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.111603 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" path="/var/lib/kubelet/pods/4de0775a-dd54-436c-a5ff-fd6782a559a8/volumes" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.113801 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" path="/var/lib/kubelet/pods/5a7d128f-32e9-47c5-bac4-6e94898ea0b7/volumes" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.115047 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" path="/var/lib/kubelet/pods/e27af7be-51b7-40ad-a740-9f9cc14fa328/volumes" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.117539 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" path="/var/lib/kubelet/pods/f06eb87f-2805-4e34-bbb7-86d5ee8d9f37/volumes" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.348585 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.348734 4992 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="8d4dddd46e4215c580466c7e9cc7a91c53051518facf7a86d3ece0d1fb40c440" exitCode=137 Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.452954 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.453077 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482180 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482236 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482258 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482277 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482276 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482294 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482326 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482341 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482352 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482553 4992 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482578 4992 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482595 4992 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.482613 4992 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.491555 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:28:14 crc kubenswrapper[4992]: I1211 08:28:14.583895 4992 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:15 crc kubenswrapper[4992]: I1211 08:28:15.366224 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 08:28:15 crc kubenswrapper[4992]: I1211 08:28:15.366397 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 08:28:15 crc kubenswrapper[4992]: I1211 08:28:15.366425 4992 scope.go:117] "RemoveContainer" containerID="8d4dddd46e4215c580466c7e9cc7a91c53051518facf7a86d3ece0d1fb40c440" Dec 11 08:28:16 crc kubenswrapper[4992]: I1211 08:28:16.104898 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 11 08:28:16 crc kubenswrapper[4992]: I1211 08:28:16.105589 4992 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 11 08:28:16 crc kubenswrapper[4992]: I1211 08:28:16.119927 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 08:28:16 crc kubenswrapper[4992]: I1211 08:28:16.120006 4992 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="32fcd296-4b3a-4b74-9655-54763f54ca30" Dec 11 08:28:16 crc kubenswrapper[4992]: I1211 08:28:16.125977 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 08:28:16 crc kubenswrapper[4992]: I1211 08:28:16.126019 4992 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="32fcd296-4b3a-4b74-9655-54763f54ca30" Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.496417 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-46cbx"] Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.497357 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" podUID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" containerName="controller-manager" containerID="cri-o://9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770" gracePeriod=30 Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.611021 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"] Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.611298 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" podUID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" containerName="route-controller-manager" containerID="cri-o://e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8" gracePeriod=30 Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.894221 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.971062 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-client-ca\") pod \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.971134 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-proxy-ca-bundles\") pod \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.971171 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-config\") pod \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.971241 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdpwq\" (UniqueName: \"kubernetes.io/projected/83b64ec0-5648-49b2-9e7e-32834c30e7a9-kube-api-access-tdpwq\") pod \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.971266 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b64ec0-5648-49b2-9e7e-32834c30e7a9-serving-cert\") pod \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\" (UID: \"83b64ec0-5648-49b2-9e7e-32834c30e7a9\") " Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.972419 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "83b64ec0-5648-49b2-9e7e-32834c30e7a9" (UID: "83b64ec0-5648-49b2-9e7e-32834c30e7a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.972460 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83b64ec0-5648-49b2-9e7e-32834c30e7a9" (UID: "83b64ec0-5648-49b2-9e7e-32834c30e7a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.972746 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-config" (OuterVolumeSpecName: "config") pod "83b64ec0-5648-49b2-9e7e-32834c30e7a9" (UID: "83b64ec0-5648-49b2-9e7e-32834c30e7a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.979363 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b64ec0-5648-49b2-9e7e-32834c30e7a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83b64ec0-5648-49b2-9e7e-32834c30e7a9" (UID: "83b64ec0-5648-49b2-9e7e-32834c30e7a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:28:28 crc kubenswrapper[4992]: I1211 08:28:28.985318 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b64ec0-5648-49b2-9e7e-32834c30e7a9-kube-api-access-tdpwq" (OuterVolumeSpecName: "kube-api-access-tdpwq") pod "83b64ec0-5648-49b2-9e7e-32834c30e7a9" (UID: "83b64ec0-5648-49b2-9e7e-32834c30e7a9"). InnerVolumeSpecName "kube-api-access-tdpwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.042870 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072103 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-config\") pod \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072259 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chzsn\" (UniqueName: \"kubernetes.io/projected/306b229e-5b0e-4c77-83ce-f95f1176dc2b-kube-api-access-chzsn\") pod \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072300 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-client-ca\") pod \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072378 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306b229e-5b0e-4c77-83ce-f95f1176dc2b-serving-cert\") pod \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\" (UID: \"306b229e-5b0e-4c77-83ce-f95f1176dc2b\") " Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072693 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072715 
4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdpwq\" (UniqueName: \"kubernetes.io/projected/83b64ec0-5648-49b2-9e7e-32834c30e7a9-kube-api-access-tdpwq\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072729 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83b64ec0-5648-49b2-9e7e-32834c30e7a9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072742 4992 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.072753 4992 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83b64ec0-5648-49b2-9e7e-32834c30e7a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.073195 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-config" (OuterVolumeSpecName: "config") pod "306b229e-5b0e-4c77-83ce-f95f1176dc2b" (UID: "306b229e-5b0e-4c77-83ce-f95f1176dc2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.073437 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "306b229e-5b0e-4c77-83ce-f95f1176dc2b" (UID: "306b229e-5b0e-4c77-83ce-f95f1176dc2b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.083762 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/306b229e-5b0e-4c77-83ce-f95f1176dc2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "306b229e-5b0e-4c77-83ce-f95f1176dc2b" (UID: "306b229e-5b0e-4c77-83ce-f95f1176dc2b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.083924 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/306b229e-5b0e-4c77-83ce-f95f1176dc2b-kube-api-access-chzsn" (OuterVolumeSpecName: "kube-api-access-chzsn") pod "306b229e-5b0e-4c77-83ce-f95f1176dc2b" (UID: "306b229e-5b0e-4c77-83ce-f95f1176dc2b"). InnerVolumeSpecName "kube-api-access-chzsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.173150 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chzsn\" (UniqueName: \"kubernetes.io/projected/306b229e-5b0e-4c77-83ce-f95f1176dc2b-kube-api-access-chzsn\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.173198 4992 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.173211 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/306b229e-5b0e-4c77-83ce-f95f1176dc2b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.173221 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306b229e-5b0e-4c77-83ce-f95f1176dc2b-config\") on node \"crc\" DevicePath 
\"\"" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.466297 4992 generic.go:334] "Generic (PLEG): container finished" podID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" containerID="e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8" exitCode=0 Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.466378 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" event={"ID":"306b229e-5b0e-4c77-83ce-f95f1176dc2b","Type":"ContainerDied","Data":"e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8"} Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.466473 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" event={"ID":"306b229e-5b0e-4c77-83ce-f95f1176dc2b","Type":"ContainerDied","Data":"ab1e85b2dfbb0a45f510fb0bda506e998188e33b0811add921402c4c3e19ec7d"} Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.466400 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.466525 4992 scope.go:117] "RemoveContainer" containerID="e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.469925 4992 generic.go:334] "Generic (PLEG): container finished" podID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" containerID="9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770" exitCode=0 Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.469983 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" event={"ID":"83b64ec0-5648-49b2-9e7e-32834c30e7a9","Type":"ContainerDied","Data":"9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770"} Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.470003 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.470022 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-46cbx" event={"ID":"83b64ec0-5648-49b2-9e7e-32834c30e7a9","Type":"ContainerDied","Data":"56f05104a51b5f0950772e17050568c10939da046adbf20df61a3e0413f0fe27"} Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.483439 4992 scope.go:117] "RemoveContainer" containerID="e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.484889 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8\": container with ID starting with e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8 not found: ID does not exist" 
containerID="e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.484943 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8"} err="failed to get container status \"e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8\": rpc error: code = NotFound desc = could not find container \"e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8\": container with ID starting with e16fa37ee35bece8f4a117d0b4b9dfb431fd0235282c4668e2939ebc6b722fe8 not found: ID does not exist" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.484978 4992 scope.go:117] "RemoveContainer" containerID="9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.502796 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"] Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.509596 4992 scope.go:117] "RemoveContainer" containerID="9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.509914 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wjwg6"] Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.510538 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770\": container with ID starting with 9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770 not found: ID does not exist" containerID="9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.510581 4992 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770"} err="failed to get container status \"9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770\": rpc error: code = NotFound desc = could not find container \"9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770\": container with ID starting with 9f7546d5a4d48b4995561f6ac9ef908b04125dae2e5f9d6944ba559072f88770 not found: ID does not exist" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.520151 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-46cbx"] Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.523298 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-46cbx"] Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.879759 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z5qcj"] Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880083 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880101 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880118 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880126 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880137 4992 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880146 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880155 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880163 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880176 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880184 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880194 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880201 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880217 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880225 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880234 4992 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880242 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880254 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880261 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="extract-content" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880272 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880279 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880292 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" containerName="route-controller-manager" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880300 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" containerName="route-controller-manager" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880312 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880319 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880334 4992 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880342 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880354 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880361 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880375 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880382 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880393 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" containerName="controller-manager" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880401 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" containerName="controller-manager" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880411 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880419 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="extract-utilities" Dec 11 08:28:29 crc kubenswrapper[4992]: E1211 08:28:29.880428 4992 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880437 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880545 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06eb87f-2805-4e34-bbb7-86d5ee8d9f37" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880559 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a5362af-66f3-4482-8f2c-2f5748283eac" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880571 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" containerName="route-controller-manager" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880582 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" containerName="controller-manager" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880591 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="07389d03-2315-4483-b6bc-c25d2fb69f53" containerName="marketplace-operator" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880602 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7d128f-32e9-47c5-bac4-6e94898ea0b7" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880617 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de0775a-dd54-436c-a5ff-fd6782a559a8" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.880626 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27af7be-51b7-40ad-a740-9f9cc14fa328" containerName="registry-server" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.881157 4992 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.887140 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.887675 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.887959 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.888154 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.888150 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.888268 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.890371 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z5qcj"] Dec 11 08:28:29 crc kubenswrapper[4992]: I1211 08:28:29.893731 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.082138 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-proxy-ca-bundles\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " 
pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.082235 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdc6\" (UniqueName: \"kubernetes.io/projected/aecbdeff-a1da-48d6-8e01-fb0723c441f6-kube-api-access-rvdc6\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.082275 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-client-ca\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.082315 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-config\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.082361 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aecbdeff-a1da-48d6-8e01-fb0723c441f6-serving-cert\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.110392 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="306b229e-5b0e-4c77-83ce-f95f1176dc2b" 
path="/var/lib/kubelet/pods/306b229e-5b0e-4c77-83ce-f95f1176dc2b/volumes" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.111761 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b64ec0-5648-49b2-9e7e-32834c30e7a9" path="/var/lib/kubelet/pods/83b64ec0-5648-49b2-9e7e-32834c30e7a9/volumes" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.183404 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-config\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.183520 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aecbdeff-a1da-48d6-8e01-fb0723c441f6-serving-cert\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.183572 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-proxy-ca-bundles\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.183612 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvdc6\" (UniqueName: \"kubernetes.io/projected/aecbdeff-a1da-48d6-8e01-fb0723c441f6-kube-api-access-rvdc6\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" 
Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.183839 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-client-ca\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.185101 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-client-ca\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.185398 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-config\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.186304 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-proxy-ca-bundles\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.190470 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aecbdeff-a1da-48d6-8e01-fb0723c441f6-serving-cert\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " 
pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.200112 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvdc6\" (UniqueName: \"kubernetes.io/projected/aecbdeff-a1da-48d6-8e01-fb0723c441f6-kube-api-access-rvdc6\") pod \"controller-manager-77bc486b6-z5qcj\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.205917 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.387132 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z5qcj"] Dec 11 08:28:30 crc kubenswrapper[4992]: W1211 08:28:30.398751 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaecbdeff_a1da_48d6_8e01_fb0723c441f6.slice/crio-a05ce718d25516e8c7d293eb3224dac78671fd7c9617afc76f1c03e870183e00 WatchSource:0}: Error finding container a05ce718d25516e8c7d293eb3224dac78671fd7c9617afc76f1c03e870183e00: Status 404 returned error can't find the container with id a05ce718d25516e8c7d293eb3224dac78671fd7c9617afc76f1c03e870183e00 Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.477549 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" event={"ID":"aecbdeff-a1da-48d6-8e01-fb0723c441f6","Type":"ContainerStarted","Data":"a05ce718d25516e8c7d293eb3224dac78671fd7c9617afc76f1c03e870183e00"} Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.878448 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp"] Dec 11 08:28:30 crc 
kubenswrapper[4992]: I1211 08:28:30.879299 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.881860 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.882022 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.882310 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.883325 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.883343 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.884208 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.892298 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54f8\" (UniqueName: \"kubernetes.io/projected/222813d8-c4a2-4dfe-977d-444ee3e926d2-kube-api-access-d54f8\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.892343 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-config\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.892381 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-client-ca\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.892438 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222813d8-c4a2-4dfe-977d-444ee3e926d2-serving-cert\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.892926 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp"] Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.993130 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d54f8\" (UniqueName: \"kubernetes.io/projected/222813d8-c4a2-4dfe-977d-444ee3e926d2-kube-api-access-d54f8\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.993183 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-config\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.993359 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-client-ca\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.993481 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222813d8-c4a2-4dfe-977d-444ee3e926d2-serving-cert\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.994090 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-client-ca\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:30 crc kubenswrapper[4992]: I1211 08:28:30.994224 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-config\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:31 crc kubenswrapper[4992]: 
I1211 08:28:30.999523 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222813d8-c4a2-4dfe-977d-444ee3e926d2-serving-cert\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.010515 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54f8\" (UniqueName: \"kubernetes.io/projected/222813d8-c4a2-4dfe-977d-444ee3e926d2-kube-api-access-d54f8\") pod \"route-controller-manager-7ff7586b44-p8ldp\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.194998 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.387340 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp"] Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.484911 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" event={"ID":"222813d8-c4a2-4dfe-977d-444ee3e926d2","Type":"ContainerStarted","Data":"9ef7e3119fdd74f535b93322ebae4c3bc5d7a0e65a396d8c6cfbb6992223fa89"} Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.489100 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" event={"ID":"aecbdeff-a1da-48d6-8e01-fb0723c441f6","Type":"ContainerStarted","Data":"a3ef74eb3a81f94eeb15b4bcbabde20fe3c495f58abf153e5b5a3640a3180d7c"} Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 
08:28:31.489413 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.496749 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:28:31 crc kubenswrapper[4992]: I1211 08:28:31.517068 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" podStartSLOduration=3.517039699 podStartE2EDuration="3.517039699s" podCreationTimestamp="2025-12-11 08:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:28:31.510294314 +0000 UTC m=+335.769768240" watchObservedRunningTime="2025-12-11 08:28:31.517039699 +0000 UTC m=+335.776513625" Dec 11 08:28:32 crc kubenswrapper[4992]: I1211 08:28:32.496037 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" event={"ID":"222813d8-c4a2-4dfe-977d-444ee3e926d2","Type":"ContainerStarted","Data":"e1520d81a9d3c798e6ad7c334e40c751e460a7643cadaab61710e0ec784afec9"} Dec 11 08:28:32 crc kubenswrapper[4992]: I1211 08:28:32.526147 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" podStartSLOduration=4.526120007 podStartE2EDuration="4.526120007s" podCreationTimestamp="2025-12-11 08:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:28:32.523652529 +0000 UTC m=+336.783126465" watchObservedRunningTime="2025-12-11 08:28:32.526120007 +0000 UTC m=+336.785593933" Dec 11 08:28:33 crc kubenswrapper[4992]: I1211 08:28:33.501688 4992 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:28:33 crc kubenswrapper[4992]: I1211 08:28:33.509891 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.378373 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.379328 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.580730 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q748k"] Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.582854 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.586589 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.588919 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q748k"] Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.750352 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgt9\" (UniqueName: \"kubernetes.io/projected/22fe6c50-eedd-446c-8475-80ecb4676613-kube-api-access-hkgt9\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.750423 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe6c50-eedd-446c-8475-80ecb4676613-catalog-content\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.750550 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe6c50-eedd-446c-8475-80ecb4676613-utilities\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.780599 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7s6x7"] Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.782263 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.784904 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.786540 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7s6x7"] Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.851654 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgt9\" (UniqueName: \"kubernetes.io/projected/22fe6c50-eedd-446c-8475-80ecb4676613-kube-api-access-hkgt9\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.851730 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe6c50-eedd-446c-8475-80ecb4676613-catalog-content\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.851805 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe6c50-eedd-446c-8475-80ecb4676613-utilities\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.853347 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22fe6c50-eedd-446c-8475-80ecb4676613-utilities\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 
08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.853503 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22fe6c50-eedd-446c-8475-80ecb4676613-catalog-content\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.872659 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgt9\" (UniqueName: \"kubernetes.io/projected/22fe6c50-eedd-446c-8475-80ecb4676613-kube-api-access-hkgt9\") pod \"redhat-operators-q748k\" (UID: \"22fe6c50-eedd-446c-8475-80ecb4676613\") " pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.909720 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.953804 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7656\" (UniqueName: \"kubernetes.io/projected/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-kube-api-access-v7656\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.954364 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-utilities\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:05 crc kubenswrapper[4992]: I1211 08:29:05.954430 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-catalog-content\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.055901 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7656\" (UniqueName: \"kubernetes.io/projected/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-kube-api-access-v7656\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.055982 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-utilities\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.056022 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-catalog-content\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.056697 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-catalog-content\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.057479 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-utilities\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.075846 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7656\" (UniqueName: \"kubernetes.io/projected/3369c4a5-b910-47d9-b2e5-92cedd6b0ef2-kube-api-access-v7656\") pod \"community-operators-7s6x7\" (UID: \"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2\") " pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.101815 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.338393 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q748k"] Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.521978 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7s6x7"] Dec 11 08:29:06 crc kubenswrapper[4992]: W1211 08:29:06.573237 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3369c4a5_b910_47d9_b2e5_92cedd6b0ef2.slice/crio-369e01413f3bd9d7679c151070eff1ee65ec2df955291c51a882cb72644a7f2e WatchSource:0}: Error finding container 369e01413f3bd9d7679c151070eff1ee65ec2df955291c51a882cb72644a7f2e: Status 404 returned error can't find the container with id 369e01413f3bd9d7679c151070eff1ee65ec2df955291c51a882cb72644a7f2e Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.724419 4992 generic.go:334] "Generic (PLEG): container finished" podID="3369c4a5-b910-47d9-b2e5-92cedd6b0ef2" containerID="0da5176b1f79f51f24c561c00e1af2e6f2444cfdcf6b4e261f9f59cc634240d2" exitCode=0 Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 
08:29:06.724706 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s6x7" event={"ID":"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2","Type":"ContainerDied","Data":"0da5176b1f79f51f24c561c00e1af2e6f2444cfdcf6b4e261f9f59cc634240d2"} Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.724760 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s6x7" event={"ID":"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2","Type":"ContainerStarted","Data":"369e01413f3bd9d7679c151070eff1ee65ec2df955291c51a882cb72644a7f2e"} Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.728091 4992 generic.go:334] "Generic (PLEG): container finished" podID="22fe6c50-eedd-446c-8475-80ecb4676613" containerID="cb648ca91d12af85eb3c80bbb72e6b439458ead68b05754158405d38f804ccb1" exitCode=0 Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.728137 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q748k" event={"ID":"22fe6c50-eedd-446c-8475-80ecb4676613","Type":"ContainerDied","Data":"cb648ca91d12af85eb3c80bbb72e6b439458ead68b05754158405d38f804ccb1"} Dec 11 08:29:06 crc kubenswrapper[4992]: I1211 08:29:06.728168 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q748k" event={"ID":"22fe6c50-eedd-446c-8475-80ecb4676613","Type":"ContainerStarted","Data":"562c1fa34af5dee539a08e20c9e5f440a8577ca8ccdcbbcabb015cf61a2947ae"} Dec 11 08:29:07 crc kubenswrapper[4992]: I1211 08:29:07.734544 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s6x7" event={"ID":"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2","Type":"ContainerStarted","Data":"67bb04827f4a13c0cab2d0fe66b999895fd9d6a36562a3078dc6fb1ff43524b8"} Dec 11 08:29:07 crc kubenswrapper[4992]: I1211 08:29:07.736475 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q748k" 
event={"ID":"22fe6c50-eedd-446c-8475-80ecb4676613","Type":"ContainerStarted","Data":"f3d99cc51bde41d8efdab368bc3308751dcb3b2d5802e27db5e7faf652591ab9"} Dec 11 08:29:07 crc kubenswrapper[4992]: I1211 08:29:07.967540 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxlmp"] Dec 11 08:29:07 crc kubenswrapper[4992]: I1211 08:29:07.968598 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:07 crc kubenswrapper[4992]: I1211 08:29:07.970120 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 08:29:07 crc kubenswrapper[4992]: I1211 08:29:07.987856 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxlmp"] Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.089186 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jv9q\" (UniqueName: \"kubernetes.io/projected/125324f4-c036-4fd9-aa27-4f9e5774b59e-kube-api-access-6jv9q\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.089506 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125324f4-c036-4fd9-aa27-4f9e5774b59e-catalog-content\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.089613 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125324f4-c036-4fd9-aa27-4f9e5774b59e-utilities\") pod 
\"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.171101 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qqwws"] Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.172352 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.174681 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.190973 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125324f4-c036-4fd9-aa27-4f9e5774b59e-catalog-content\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.191048 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125324f4-c036-4fd9-aa27-4f9e5774b59e-utilities\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.191873 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jv9q\" (UniqueName: \"kubernetes.io/projected/125324f4-c036-4fd9-aa27-4f9e5774b59e-kube-api-access-6jv9q\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.192184 4992 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125324f4-c036-4fd9-aa27-4f9e5774b59e-catalog-content\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.192336 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125324f4-c036-4fd9-aa27-4f9e5774b59e-utilities\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.197905 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqwws"] Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.216668 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jv9q\" (UniqueName: \"kubernetes.io/projected/125324f4-c036-4fd9-aa27-4f9e5774b59e-kube-api-access-6jv9q\") pod \"certified-operators-wxlmp\" (UID: \"125324f4-c036-4fd9-aa27-4f9e5774b59e\") " pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.293518 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-catalog-content\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.293627 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxk9\" (UniqueName: \"kubernetes.io/projected/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-kube-api-access-vhxk9\") pod \"redhat-marketplace-qqwws\" (UID: 
\"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.293722 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-utilities\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.307113 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.396207 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxk9\" (UniqueName: \"kubernetes.io/projected/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-kube-api-access-vhxk9\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.396532 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-utilities\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.396676 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-catalog-content\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.397471 4992 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-utilities\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.397540 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-catalog-content\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.421507 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxk9\" (UniqueName: \"kubernetes.io/projected/9a2d8904-1dcb-4d16-82e8-9db4a8d986ef-kube-api-access-vhxk9\") pod \"redhat-marketplace-qqwws\" (UID: \"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef\") " pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.475369 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp"] Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.475603 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" podUID="222813d8-c4a2-4dfe-977d-444ee3e926d2" containerName="route-controller-manager" containerID="cri-o://e1520d81a9d3c798e6ad7c334e40c751e460a7643cadaab61710e0ec784afec9" gracePeriod=30 Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.555756 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.706269 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxlmp"] Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.749663 4992 generic.go:334] "Generic (PLEG): container finished" podID="222813d8-c4a2-4dfe-977d-444ee3e926d2" containerID="e1520d81a9d3c798e6ad7c334e40c751e460a7643cadaab61710e0ec784afec9" exitCode=0 Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.749730 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" event={"ID":"222813d8-c4a2-4dfe-977d-444ee3e926d2","Type":"ContainerDied","Data":"e1520d81a9d3c798e6ad7c334e40c751e460a7643cadaab61710e0ec784afec9"} Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.751953 4992 generic.go:334] "Generic (PLEG): container finished" podID="22fe6c50-eedd-446c-8475-80ecb4676613" containerID="f3d99cc51bde41d8efdab368bc3308751dcb3b2d5802e27db5e7faf652591ab9" exitCode=0 Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.752018 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q748k" event={"ID":"22fe6c50-eedd-446c-8475-80ecb4676613","Type":"ContainerDied","Data":"f3d99cc51bde41d8efdab368bc3308751dcb3b2d5802e27db5e7faf652591ab9"} Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.758743 4992 generic.go:334] "Generic (PLEG): container finished" podID="3369c4a5-b910-47d9-b2e5-92cedd6b0ef2" containerID="67bb04827f4a13c0cab2d0fe66b999895fd9d6a36562a3078dc6fb1ff43524b8" exitCode=0 Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.758833 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s6x7" 
event={"ID":"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2","Type":"ContainerDied","Data":"67bb04827f4a13c0cab2d0fe66b999895fd9d6a36562a3078dc6fb1ff43524b8"} Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.759722 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxlmp" event={"ID":"125324f4-c036-4fd9-aa27-4f9e5774b59e","Type":"ContainerStarted","Data":"45e9f4226763762fe8a39043671d6ea70bc8170213fc4a26adf4809c4bc29dca"} Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.838983 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:29:08 crc kubenswrapper[4992]: I1211 08:29:08.951726 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qqwws"] Dec 11 08:29:08 crc kubenswrapper[4992]: W1211 08:29:08.955101 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a2d8904_1dcb_4d16_82e8_9db4a8d986ef.slice/crio-ec8e56fd8bdc68dd6a7bf560a948643d78eb7316f391ea07dd0e063f085d9442 WatchSource:0}: Error finding container ec8e56fd8bdc68dd6a7bf560a948643d78eb7316f391ea07dd0e063f085d9442: Status 404 returned error can't find the container with id ec8e56fd8bdc68dd6a7bf560a948643d78eb7316f391ea07dd0e063f085d9442 Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.003979 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222813d8-c4a2-4dfe-977d-444ee3e926d2-serving-cert\") pod \"222813d8-c4a2-4dfe-977d-444ee3e926d2\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.004076 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d54f8\" (UniqueName: 
\"kubernetes.io/projected/222813d8-c4a2-4dfe-977d-444ee3e926d2-kube-api-access-d54f8\") pod \"222813d8-c4a2-4dfe-977d-444ee3e926d2\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.004111 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-config\") pod \"222813d8-c4a2-4dfe-977d-444ee3e926d2\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.004198 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-client-ca\") pod \"222813d8-c4a2-4dfe-977d-444ee3e926d2\" (UID: \"222813d8-c4a2-4dfe-977d-444ee3e926d2\") " Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.005657 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "222813d8-c4a2-4dfe-977d-444ee3e926d2" (UID: "222813d8-c4a2-4dfe-977d-444ee3e926d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.012405 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222813d8-c4a2-4dfe-977d-444ee3e926d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "222813d8-c4a2-4dfe-977d-444ee3e926d2" (UID: "222813d8-c4a2-4dfe-977d-444ee3e926d2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.012908 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-config" (OuterVolumeSpecName: "config") pod "222813d8-c4a2-4dfe-977d-444ee3e926d2" (UID: "222813d8-c4a2-4dfe-977d-444ee3e926d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.020933 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222813d8-c4a2-4dfe-977d-444ee3e926d2-kube-api-access-d54f8" (OuterVolumeSpecName: "kube-api-access-d54f8") pod "222813d8-c4a2-4dfe-977d-444ee3e926d2" (UID: "222813d8-c4a2-4dfe-977d-444ee3e926d2"). InnerVolumeSpecName "kube-api-access-d54f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.105869 4992 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.105903 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222813d8-c4a2-4dfe-977d-444ee3e926d2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.106097 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d54f8\" (UniqueName: \"kubernetes.io/projected/222813d8-c4a2-4dfe-977d-444ee3e926d2-kube-api-access-d54f8\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.106108 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222813d8-c4a2-4dfe-977d-444ee3e926d2-config\") on node \"crc\" DevicePath \"\"" Dec 11 
08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.767981 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7s6x7" event={"ID":"3369c4a5-b910-47d9-b2e5-92cedd6b0ef2","Type":"ContainerStarted","Data":"5382b14620fa21a18414f4e5dff7e90b84ec5c18627d955c5fdb36493b522cbd"} Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.769674 4992 generic.go:334] "Generic (PLEG): container finished" podID="125324f4-c036-4fd9-aa27-4f9e5774b59e" containerID="897796c2c10bc5f1a44aa1053043f52cfdb199b8e2a8b52369a02f9d72c47a7d" exitCode=0 Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.769746 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxlmp" event={"ID":"125324f4-c036-4fd9-aa27-4f9e5774b59e","Type":"ContainerDied","Data":"897796c2c10bc5f1a44aa1053043f52cfdb199b8e2a8b52369a02f9d72c47a7d"} Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.771650 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" event={"ID":"222813d8-c4a2-4dfe-977d-444ee3e926d2","Type":"ContainerDied","Data":"9ef7e3119fdd74f535b93322ebae4c3bc5d7a0e65a396d8c6cfbb6992223fa89"} Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.771660 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.771685 4992 scope.go:117] "RemoveContainer" containerID="e1520d81a9d3c798e6ad7c334e40c751e460a7643cadaab61710e0ec784afec9" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.778679 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q748k" event={"ID":"22fe6c50-eedd-446c-8475-80ecb4676613","Type":"ContainerStarted","Data":"974c6e8d7b30bc8d4badd3c486c2982428ace90664b94cc5c0c96e3d3128e6eb"} Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.782981 4992 generic.go:334] "Generic (PLEG): container finished" podID="9a2d8904-1dcb-4d16-82e8-9db4a8d986ef" containerID="4b599c2a61a310ed8b4cf90a82780c15b7d5113c0add5960dd70846ef576f874" exitCode=0 Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.783065 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqwws" event={"ID":"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef","Type":"ContainerDied","Data":"4b599c2a61a310ed8b4cf90a82780c15b7d5113c0add5960dd70846ef576f874"} Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.783128 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqwws" event={"ID":"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef","Type":"ContainerStarted","Data":"ec8e56fd8bdc68dd6a7bf560a948643d78eb7316f391ea07dd0e063f085d9442"} Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.787192 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7s6x7" podStartSLOduration=2.323905193 podStartE2EDuration="4.787176893s" podCreationTimestamp="2025-12-11 08:29:05 +0000 UTC" firstStartedPulling="2025-12-11 08:29:06.725954089 +0000 UTC m=+370.985428015" lastFinishedPulling="2025-12-11 08:29:09.189225779 +0000 UTC m=+373.448699715" 
observedRunningTime="2025-12-11 08:29:09.785334776 +0000 UTC m=+374.044808702" watchObservedRunningTime="2025-12-11 08:29:09.787176893 +0000 UTC m=+374.046650809" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.811249 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q748k" podStartSLOduration=2.222630449 podStartE2EDuration="4.811230032s" podCreationTimestamp="2025-12-11 08:29:05 +0000 UTC" firstStartedPulling="2025-12-11 08:29:06.730238625 +0000 UTC m=+370.989712551" lastFinishedPulling="2025-12-11 08:29:09.318838208 +0000 UTC m=+373.578312134" observedRunningTime="2025-12-11 08:29:09.806457567 +0000 UTC m=+374.065931493" watchObservedRunningTime="2025-12-11 08:29:09.811230032 +0000 UTC m=+374.070703958" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.865817 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp"] Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.869026 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p8ldp"] Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.897741 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj"] Dec 11 08:29:09 crc kubenswrapper[4992]: E1211 08:29:09.898006 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222813d8-c4a2-4dfe-977d-444ee3e926d2" containerName="route-controller-manager" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.898021 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="222813d8-c4a2-4dfe-977d-444ee3e926d2" containerName="route-controller-manager" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.898132 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="222813d8-c4a2-4dfe-977d-444ee3e926d2" 
containerName="route-controller-manager" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.898574 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.902934 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.903172 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.903377 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.903463 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.903490 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.904139 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 08:29:09 crc kubenswrapper[4992]: I1211 08:29:09.911495 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj"] Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.020098 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d72ae82-11ca-4800-a530-9d21fd137556-serving-cert\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " 
pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.024236 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjc2\" (UniqueName: \"kubernetes.io/projected/9d72ae82-11ca-4800-a530-9d21fd137556-kube-api-access-qxjc2\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.024293 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d72ae82-11ca-4800-a530-9d21fd137556-client-ca\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.024345 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d72ae82-11ca-4800-a530-9d21fd137556-config\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.103927 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222813d8-c4a2-4dfe-977d-444ee3e926d2" path="/var/lib/kubelet/pods/222813d8-c4a2-4dfe-977d-444ee3e926d2/volumes" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.126468 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d72ae82-11ca-4800-a530-9d21fd137556-serving-cert\") pod \"route-controller-manager-566776b648-cqlvj\" 
(UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.126580 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjc2\" (UniqueName: \"kubernetes.io/projected/9d72ae82-11ca-4800-a530-9d21fd137556-kube-api-access-qxjc2\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.126621 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d72ae82-11ca-4800-a530-9d21fd137556-client-ca\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.126686 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d72ae82-11ca-4800-a530-9d21fd137556-config\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.128521 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d72ae82-11ca-4800-a530-9d21fd137556-client-ca\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.128759 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/9d72ae82-11ca-4800-a530-9d21fd137556-config\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.142962 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d72ae82-11ca-4800-a530-9d21fd137556-serving-cert\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.149987 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjc2\" (UniqueName: \"kubernetes.io/projected/9d72ae82-11ca-4800-a530-9d21fd137556-kube-api-access-qxjc2\") pod \"route-controller-manager-566776b648-cqlvj\" (UID: \"9d72ae82-11ca-4800-a530-9d21fd137556\") " pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.213739 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.417143 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj"] Dec 11 08:29:10 crc kubenswrapper[4992]: W1211 08:29:10.423986 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d72ae82_11ca_4800_a530_9d21fd137556.slice/crio-ac224e7ded1e5b13f6ab7b30cfb79b1c558fce543f278a988c1eed97bbc45710 WatchSource:0}: Error finding container ac224e7ded1e5b13f6ab7b30cfb79b1c558fce543f278a988c1eed97bbc45710: Status 404 returned error can't find the container with id ac224e7ded1e5b13f6ab7b30cfb79b1c558fce543f278a988c1eed97bbc45710 Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.793413 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" event={"ID":"9d72ae82-11ca-4800-a530-9d21fd137556","Type":"ContainerStarted","Data":"f41e311dec6e873d38d2cf9ce950624863da2e221d21a6254963f619ec0274f8"} Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.793899 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" event={"ID":"9d72ae82-11ca-4800-a530-9d21fd137556","Type":"ContainerStarted","Data":"ac224e7ded1e5b13f6ab7b30cfb79b1c558fce543f278a988c1eed97bbc45710"} Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.793926 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.795215 4992 generic.go:334] "Generic (PLEG): container finished" podID="125324f4-c036-4fd9-aa27-4f9e5774b59e" 
containerID="4b11a4755d192ab9b6850ae7886eef7788299e6273e399a9b3672b554b3bb60c" exitCode=0 Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.795552 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxlmp" event={"ID":"125324f4-c036-4fd9-aa27-4f9e5774b59e","Type":"ContainerDied","Data":"4b11a4755d192ab9b6850ae7886eef7788299e6273e399a9b3672b554b3bb60c"} Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.799386 4992 generic.go:334] "Generic (PLEG): container finished" podID="9a2d8904-1dcb-4d16-82e8-9db4a8d986ef" containerID="612fbb016de51e414100e45fc62062162a56af5a165101214fdb40fb57580688" exitCode=0 Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.799611 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqwws" event={"ID":"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef","Type":"ContainerDied","Data":"612fbb016de51e414100e45fc62062162a56af5a165101214fdb40fb57580688"} Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.819496 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" podStartSLOduration=2.819473494 podStartE2EDuration="2.819473494s" podCreationTimestamp="2025-12-11 08:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:29:10.816144138 +0000 UTC m=+375.075618064" watchObservedRunningTime="2025-12-11 08:29:10.819473494 +0000 UTC m=+375.078947410" Dec 11 08:29:10 crc kubenswrapper[4992]: I1211 08:29:10.951466 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-566776b648-cqlvj" Dec 11 08:29:11 crc kubenswrapper[4992]: I1211 08:29:11.814778 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxlmp" 
event={"ID":"125324f4-c036-4fd9-aa27-4f9e5774b59e","Type":"ContainerStarted","Data":"426bff637c018156765bb61d792e0bdb39b1a18106162a37d8bf87a51be4543c"} Dec 11 08:29:11 crc kubenswrapper[4992]: I1211 08:29:11.818183 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qqwws" event={"ID":"9a2d8904-1dcb-4d16-82e8-9db4a8d986ef","Type":"ContainerStarted","Data":"86d5876414b22a84944a05dd51c59bba536aee06c8f1ec1ae36a149fba8f0310"} Dec 11 08:29:11 crc kubenswrapper[4992]: I1211 08:29:11.835619 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxlmp" podStartSLOduration=3.3357209230000002 podStartE2EDuration="4.835598783s" podCreationTimestamp="2025-12-11 08:29:07 +0000 UTC" firstStartedPulling="2025-12-11 08:29:09.771945059 +0000 UTC m=+374.031418985" lastFinishedPulling="2025-12-11 08:29:11.271822919 +0000 UTC m=+375.531296845" observedRunningTime="2025-12-11 08:29:11.832505992 +0000 UTC m=+376.091979918" watchObservedRunningTime="2025-12-11 08:29:11.835598783 +0000 UTC m=+376.095072709" Dec 11 08:29:11 crc kubenswrapper[4992]: I1211 08:29:11.853192 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qqwws" podStartSLOduration=2.403519893 podStartE2EDuration="3.853173583s" podCreationTimestamp="2025-12-11 08:29:08 +0000 UTC" firstStartedPulling="2025-12-11 08:29:09.790214943 +0000 UTC m=+374.049688869" lastFinishedPulling="2025-12-11 08:29:11.239868633 +0000 UTC m=+375.499342559" observedRunningTime="2025-12-11 08:29:11.851302096 +0000 UTC m=+376.110776022" watchObservedRunningTime="2025-12-11 08:29:11.853173583 +0000 UTC m=+376.112647509" Dec 11 08:29:15 crc kubenswrapper[4992]: I1211 08:29:15.910651 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:15 crc kubenswrapper[4992]: I1211 08:29:15.911070 4992 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:15 crc kubenswrapper[4992]: I1211 08:29:15.950107 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:16 crc kubenswrapper[4992]: I1211 08:29:16.103346 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:16 crc kubenswrapper[4992]: I1211 08:29:16.103386 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:16 crc kubenswrapper[4992]: I1211 08:29:16.143074 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:16 crc kubenswrapper[4992]: I1211 08:29:16.902371 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7s6x7" Dec 11 08:29:16 crc kubenswrapper[4992]: I1211 08:29:16.905593 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q748k" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.308427 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.308713 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.372790 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.556598 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.556779 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.617331 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.933539 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qqwws" Dec 11 08:29:18 crc kubenswrapper[4992]: I1211 08:29:18.933992 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxlmp" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.673470 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-87mdh"] Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.674842 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.695723 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-87mdh"] Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.815835 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f189dbd0-7553-4099-a705-1f5bb7fd90d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.815934 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.815979 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-registry-tls\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.816040 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f189dbd0-7553-4099-a705-1f5bb7fd90d9-registry-certificates\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.816077 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-bound-sa-token\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.816112 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f189dbd0-7553-4099-a705-1f5bb7fd90d9-trusted-ca\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.816319 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzwv2\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-kube-api-access-rzwv2\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.816527 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f189dbd0-7553-4099-a705-1f5bb7fd90d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.839951 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.917904 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzwv2\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-kube-api-access-rzwv2\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.918322 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f189dbd0-7553-4099-a705-1f5bb7fd90d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.918428 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f189dbd0-7553-4099-a705-1f5bb7fd90d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.918525 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-registry-tls\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.918618 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f189dbd0-7553-4099-a705-1f5bb7fd90d9-registry-certificates\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.918735 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-bound-sa-token\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.918823 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f189dbd0-7553-4099-a705-1f5bb7fd90d9-trusted-ca\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.919423 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f189dbd0-7553-4099-a705-1f5bb7fd90d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.920209 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f189dbd0-7553-4099-a705-1f5bb7fd90d9-registry-certificates\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 
11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.920549 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f189dbd0-7553-4099-a705-1f5bb7fd90d9-trusted-ca\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.930170 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f189dbd0-7553-4099-a705-1f5bb7fd90d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.930503 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-registry-tls\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.941235 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzwv2\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-kube-api-access-rzwv2\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.943002 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f189dbd0-7553-4099-a705-1f5bb7fd90d9-bound-sa-token\") pod \"image-registry-66df7c8f76-87mdh\" (UID: \"f189dbd0-7553-4099-a705-1f5bb7fd90d9\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:23 crc kubenswrapper[4992]: I1211 08:29:23.994065 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:24 crc kubenswrapper[4992]: I1211 08:29:24.464426 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-87mdh"] Dec 11 08:29:24 crc kubenswrapper[4992]: W1211 08:29:24.471068 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf189dbd0_7553_4099_a705_1f5bb7fd90d9.slice/crio-7456c2b5682ad6352eb3eb6441a1d6fbdf211dbb7e194550523bc1d76193ba3e WatchSource:0}: Error finding container 7456c2b5682ad6352eb3eb6441a1d6fbdf211dbb7e194550523bc1d76193ba3e: Status 404 returned error can't find the container with id 7456c2b5682ad6352eb3eb6441a1d6fbdf211dbb7e194550523bc1d76193ba3e Dec 11 08:29:24 crc kubenswrapper[4992]: I1211 08:29:24.911923 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" event={"ID":"f189dbd0-7553-4099-a705-1f5bb7fd90d9","Type":"ContainerStarted","Data":"ea44c59501237a952f949dd1a8d0bc285266d8de47c7d188eb83f6d1291df983"} Dec 11 08:29:24 crc kubenswrapper[4992]: I1211 08:29:24.912001 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" event={"ID":"f189dbd0-7553-4099-a705-1f5bb7fd90d9","Type":"ContainerStarted","Data":"7456c2b5682ad6352eb3eb6441a1d6fbdf211dbb7e194550523bc1d76193ba3e"} Dec 11 08:29:24 crc kubenswrapper[4992]: I1211 08:29:24.912789 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" Dec 11 08:29:24 crc kubenswrapper[4992]: I1211 08:29:24.935035 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-87mdh" podStartSLOduration=1.935002901 podStartE2EDuration="1.935002901s" podCreationTimestamp="2025-12-11 08:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:29:24.932371108 +0000 UTC m=+389.191845054" watchObservedRunningTime="2025-12-11 08:29:24.935002901 +0000 UTC m=+389.194476877" Dec 11 08:29:28 crc kubenswrapper[4992]: I1211 08:29:28.512225 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z5qcj"] Dec 11 08:29:28 crc kubenswrapper[4992]: I1211 08:29:28.513173 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" podUID="aecbdeff-a1da-48d6-8e01-fb0723c441f6" containerName="controller-manager" containerID="cri-o://a3ef74eb3a81f94eeb15b4bcbabde20fe3c495f58abf153e5b5a3640a3180d7c" gracePeriod=30 Dec 11 08:29:28 crc kubenswrapper[4992]: I1211 08:29:28.936502 4992 generic.go:334] "Generic (PLEG): container finished" podID="aecbdeff-a1da-48d6-8e01-fb0723c441f6" containerID="a3ef74eb3a81f94eeb15b4bcbabde20fe3c495f58abf153e5b5a3640a3180d7c" exitCode=0 Dec 11 08:29:28 crc kubenswrapper[4992]: I1211 08:29:28.936598 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" event={"ID":"aecbdeff-a1da-48d6-8e01-fb0723c441f6","Type":"ContainerDied","Data":"a3ef74eb3a81f94eeb15b4bcbabde20fe3c495f58abf153e5b5a3640a3180d7c"} Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.463018 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.618022 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-config\") pod \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.618091 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aecbdeff-a1da-48d6-8e01-fb0723c441f6-serving-cert\") pod \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.618171 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-client-ca\") pod \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.618196 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-proxy-ca-bundles\") pod \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.618247 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvdc6\" (UniqueName: \"kubernetes.io/projected/aecbdeff-a1da-48d6-8e01-fb0723c441f6-kube-api-access-rvdc6\") pod \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\" (UID: \"aecbdeff-a1da-48d6-8e01-fb0723c441f6\") " Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.619300 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-client-ca" (OuterVolumeSpecName: "client-ca") pod "aecbdeff-a1da-48d6-8e01-fb0723c441f6" (UID: "aecbdeff-a1da-48d6-8e01-fb0723c441f6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.619342 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aecbdeff-a1da-48d6-8e01-fb0723c441f6" (UID: "aecbdeff-a1da-48d6-8e01-fb0723c441f6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.619396 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-config" (OuterVolumeSpecName: "config") pod "aecbdeff-a1da-48d6-8e01-fb0723c441f6" (UID: "aecbdeff-a1da-48d6-8e01-fb0723c441f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.623975 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecbdeff-a1da-48d6-8e01-fb0723c441f6-kube-api-access-rvdc6" (OuterVolumeSpecName: "kube-api-access-rvdc6") pod "aecbdeff-a1da-48d6-8e01-fb0723c441f6" (UID: "aecbdeff-a1da-48d6-8e01-fb0723c441f6"). InnerVolumeSpecName "kube-api-access-rvdc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.624503 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecbdeff-a1da-48d6-8e01-fb0723c441f6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aecbdeff-a1da-48d6-8e01-fb0723c441f6" (UID: "aecbdeff-a1da-48d6-8e01-fb0723c441f6"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.720070 4992 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aecbdeff-a1da-48d6-8e01-fb0723c441f6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.720125 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.720136 4992 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.720150 4992 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aecbdeff-a1da-48d6-8e01-fb0723c441f6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.720165 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvdc6\" (UniqueName: \"kubernetes.io/projected/aecbdeff-a1da-48d6-8e01-fb0723c441f6-kube-api-access-rvdc6\") on node \"crc\" DevicePath \"\"" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.919260 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bbc96967c-v9rvg"] Dec 11 08:29:29 crc kubenswrapper[4992]: E1211 08:29:29.919552 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecbdeff-a1da-48d6-8e01-fb0723c441f6" containerName="controller-manager" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.919569 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecbdeff-a1da-48d6-8e01-fb0723c441f6" containerName="controller-manager" Dec 11 08:29:29 crc 
kubenswrapper[4992]: I1211 08:29:29.919724 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecbdeff-a1da-48d6-8e01-fb0723c441f6" containerName="controller-manager" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.920182 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.929485 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbc96967c-v9rvg"] Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.946066 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" event={"ID":"aecbdeff-a1da-48d6-8e01-fb0723c441f6","Type":"ContainerDied","Data":"a05ce718d25516e8c7d293eb3224dac78671fd7c9617afc76f1c03e870183e00"} Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.946270 4992 scope.go:117] "RemoveContainer" containerID="a3ef74eb3a81f94eeb15b4bcbabde20fe3c495f58abf153e5b5a3640a3180d7c" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.946192 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z5qcj" Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.983103 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z5qcj"] Dec 11 08:29:29 crc kubenswrapper[4992]: I1211 08:29:29.987352 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z5qcj"] Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.024722 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-client-ca\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.024811 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0106b1-ac50-4fc5-8aae-85f307002ece-serving-cert\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.024850 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-config\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.024909 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvszd\" (UniqueName: 
\"kubernetes.io/projected/ec0106b1-ac50-4fc5-8aae-85f307002ece-kube-api-access-pvszd\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.024937 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-proxy-ca-bundles\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.109257 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecbdeff-a1da-48d6-8e01-fb0723c441f6" path="/var/lib/kubelet/pods/aecbdeff-a1da-48d6-8e01-fb0723c441f6/volumes" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.126012 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvszd\" (UniqueName: \"kubernetes.io/projected/ec0106b1-ac50-4fc5-8aae-85f307002ece-kube-api-access-pvszd\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.126070 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-proxy-ca-bundles\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.126169 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-client-ca\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.126198 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0106b1-ac50-4fc5-8aae-85f307002ece-serving-cert\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.126238 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-config\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.127676 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-client-ca\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.128038 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-proxy-ca-bundles\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.128758 4992 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0106b1-ac50-4fc5-8aae-85f307002ece-config\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg"
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.135314 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0106b1-ac50-4fc5-8aae-85f307002ece-serving-cert\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg"
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.149752 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvszd\" (UniqueName: \"kubernetes.io/projected/ec0106b1-ac50-4fc5-8aae-85f307002ece-kube-api-access-pvszd\") pod \"controller-manager-bbc96967c-v9rvg\" (UID: \"ec0106b1-ac50-4fc5-8aae-85f307002ece\") " pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg"
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.261909 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg"
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.459043 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbc96967c-v9rvg"]
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.955762 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" event={"ID":"ec0106b1-ac50-4fc5-8aae-85f307002ece","Type":"ContainerStarted","Data":"7b3d99b1e0425f15358e832180101915a2423147a8cd25bcf1f77dbc3480e3e0"}
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.957288 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" event={"ID":"ec0106b1-ac50-4fc5-8aae-85f307002ece","Type":"ContainerStarted","Data":"e0fff706f0b7b15151122b61606d0d0d45b851e90a36db3715494dca13a220ea"}
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.957366 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg"
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.957592 4992 patch_prober.go:28] interesting pod/controller-manager-bbc96967c-v9rvg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body=
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.957695 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" podUID="ec0106b1-ac50-4fc5-8aae-85f307002ece" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused"
Dec 11 08:29:30 crc kubenswrapper[4992]: I1211 08:29:30.981623 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg" podStartSLOduration=2.981593155 podStartE2EDuration="2.981593155s" podCreationTimestamp="2025-12-11 08:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:29:30.980702533 +0000 UTC m=+395.240176459" watchObservedRunningTime="2025-12-11 08:29:30.981593155 +0000 UTC m=+395.241067081"
Dec 11 08:29:31 crc kubenswrapper[4992]: I1211 08:29:31.966113 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bbc96967c-v9rvg"
Dec 11 08:29:35 crc kubenswrapper[4992]: I1211 08:29:35.378568 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:29:35 crc kubenswrapper[4992]: I1211 08:29:35.378669 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:29:44 crc kubenswrapper[4992]: I1211 08:29:44.004796 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-87mdh"
Dec 11 08:29:44 crc kubenswrapper[4992]: I1211 08:29:44.082012 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j86m7"]
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.168781 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"]
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.170113 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.172474 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.175701 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.177732 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"]
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.292807 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b54479f-8dc7-42ad-b2c5-993f72a43852-config-volume\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.292988 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-549xb\" (UniqueName: \"kubernetes.io/projected/7b54479f-8dc7-42ad-b2c5-993f72a43852-kube-api-access-549xb\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.293048 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b54479f-8dc7-42ad-b2c5-993f72a43852-secret-volume\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.393922 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b54479f-8dc7-42ad-b2c5-993f72a43852-secret-volume\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.394073 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b54479f-8dc7-42ad-b2c5-993f72a43852-config-volume\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.394109 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-549xb\" (UniqueName: \"kubernetes.io/projected/7b54479f-8dc7-42ad-b2c5-993f72a43852-kube-api-access-549xb\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.395264 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b54479f-8dc7-42ad-b2c5-993f72a43852-config-volume\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.407614 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b54479f-8dc7-42ad-b2c5-993f72a43852-secret-volume\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.413256 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-549xb\" (UniqueName: \"kubernetes.io/projected/7b54479f-8dc7-42ad-b2c5-993f72a43852-kube-api-access-549xb\") pod \"collect-profiles-29424030-qtgvg\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.490306 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:00 crc kubenswrapper[4992]: I1211 08:30:00.946225 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"]
Dec 11 08:30:01 crc kubenswrapper[4992]: I1211 08:30:01.177019 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg" event={"ID":"7b54479f-8dc7-42ad-b2c5-993f72a43852","Type":"ContainerStarted","Data":"538dfed5e4b65afad2c5f8340b03bcc04531c22ada6b28162392e52f162534b7"}
Dec 11 08:30:01 crc kubenswrapper[4992]: I1211 08:30:01.177064 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg" event={"ID":"7b54479f-8dc7-42ad-b2c5-993f72a43852","Type":"ContainerStarted","Data":"0e90ec72cf397e20287bcf4a4857de3d59ff3e4eda67eec5c2f66229d00da458"}
Dec 11 08:30:01 crc kubenswrapper[4992]: I1211 08:30:01.198247 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg" podStartSLOduration=1.19822605 podStartE2EDuration="1.19822605s" podCreationTimestamp="2025-12-11 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:30:01.195738053 +0000 UTC m=+425.455211989" watchObservedRunningTime="2025-12-11 08:30:01.19822605 +0000 UTC m=+425.457699976"
Dec 11 08:30:02 crc kubenswrapper[4992]: I1211 08:30:02.185326 4992 generic.go:334] "Generic (PLEG): container finished" podID="7b54479f-8dc7-42ad-b2c5-993f72a43852" containerID="538dfed5e4b65afad2c5f8340b03bcc04531c22ada6b28162392e52f162534b7" exitCode=0
Dec 11 08:30:02 crc kubenswrapper[4992]: I1211 08:30:02.185375 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg" event={"ID":"7b54479f-8dc7-42ad-b2c5-993f72a43852","Type":"ContainerDied","Data":"538dfed5e4b65afad2c5f8340b03bcc04531c22ada6b28162392e52f162534b7"}
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.516113 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.646579 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-549xb\" (UniqueName: \"kubernetes.io/projected/7b54479f-8dc7-42ad-b2c5-993f72a43852-kube-api-access-549xb\") pod \"7b54479f-8dc7-42ad-b2c5-993f72a43852\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") "
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.646836 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b54479f-8dc7-42ad-b2c5-993f72a43852-config-volume\") pod \"7b54479f-8dc7-42ad-b2c5-993f72a43852\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") "
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.646867 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b54479f-8dc7-42ad-b2c5-993f72a43852-secret-volume\") pod \"7b54479f-8dc7-42ad-b2c5-993f72a43852\" (UID: \"7b54479f-8dc7-42ad-b2c5-993f72a43852\") "
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.647840 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b54479f-8dc7-42ad-b2c5-993f72a43852-config-volume" (OuterVolumeSpecName: "config-volume") pod "7b54479f-8dc7-42ad-b2c5-993f72a43852" (UID: "7b54479f-8dc7-42ad-b2c5-993f72a43852"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.653016 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b54479f-8dc7-42ad-b2c5-993f72a43852-kube-api-access-549xb" (OuterVolumeSpecName: "kube-api-access-549xb") pod "7b54479f-8dc7-42ad-b2c5-993f72a43852" (UID: "7b54479f-8dc7-42ad-b2c5-993f72a43852"). InnerVolumeSpecName "kube-api-access-549xb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.658602 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b54479f-8dc7-42ad-b2c5-993f72a43852-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7b54479f-8dc7-42ad-b2c5-993f72a43852" (UID: "7b54479f-8dc7-42ad-b2c5-993f72a43852"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.748870 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b54479f-8dc7-42ad-b2c5-993f72a43852-config-volume\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.748914 4992 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b54479f-8dc7-42ad-b2c5-993f72a43852-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:03 crc kubenswrapper[4992]: I1211 08:30:03.748967 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-549xb\" (UniqueName: \"kubernetes.io/projected/7b54479f-8dc7-42ad-b2c5-993f72a43852-kube-api-access-549xb\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:04 crc kubenswrapper[4992]: I1211 08:30:04.197944 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg" event={"ID":"7b54479f-8dc7-42ad-b2c5-993f72a43852","Type":"ContainerDied","Data":"0e90ec72cf397e20287bcf4a4857de3d59ff3e4eda67eec5c2f66229d00da458"}
Dec 11 08:30:04 crc kubenswrapper[4992]: I1211 08:30:04.198313 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e90ec72cf397e20287bcf4a4857de3d59ff3e4eda67eec5c2f66229d00da458"
Dec 11 08:30:04 crc kubenswrapper[4992]: I1211 08:30:04.198082 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"
Dec 11 08:30:05 crc kubenswrapper[4992]: I1211 08:30:05.378416 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:30:05 crc kubenswrapper[4992]: I1211 08:30:05.378502 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:30:05 crc kubenswrapper[4992]: I1211 08:30:05.378717 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c"
Dec 11 08:30:05 crc kubenswrapper[4992]: I1211 08:30:05.380378 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aff7b65415c21f93e15b1dea571d2abe79882b3fb99188a8013f1756d634527d"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 08:30:05 crc kubenswrapper[4992]: I1211 08:30:05.380480 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://aff7b65415c21f93e15b1dea571d2abe79882b3fb99188a8013f1756d634527d" gracePeriod=600
Dec 11 08:30:06 crc kubenswrapper[4992]: I1211 08:30:06.222863 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="aff7b65415c21f93e15b1dea571d2abe79882b3fb99188a8013f1756d634527d" exitCode=0
Dec 11 08:30:06 crc kubenswrapper[4992]: I1211 08:30:06.222931 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"aff7b65415c21f93e15b1dea571d2abe79882b3fb99188a8013f1756d634527d"}
Dec 11 08:30:06 crc kubenswrapper[4992]: I1211 08:30:06.223239 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"c7dc8fd690f6db0535b70af4d93802ec848135dcfde017e2a96b74005cc0d3f8"}
Dec 11 08:30:06 crc kubenswrapper[4992]: I1211 08:30:06.223263 4992 scope.go:117] "RemoveContainer" containerID="c1626c1e4cbb95ffd21e306b11ba60b31c48a9b100c08b6a9f32a15dcda375d2"
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.147007 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" podUID="45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" containerName="registry" containerID="cri-o://eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab" gracePeriod=30
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.551063 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.639720 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-bound-sa-token\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.639931 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-ca-trust-extracted\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.640005 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-trusted-ca\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.640159 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-certificates\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.640409 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-installation-pull-secrets\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.640468 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-tls\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.641293 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.641360 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.641469 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjbfd\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-kube-api-access-fjbfd\") pod \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\" (UID: \"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7\") "
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.641976 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.642297 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.653972 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.658381 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.659183 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.659274 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-kube-api-access-fjbfd" (OuterVolumeSpecName: "kube-api-access-fjbfd") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "kube-api-access-fjbfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.659428 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.659862 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" (UID: "45b6c297-6ba3-4e90-b4a9-5f17c15f22f7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.743975 4992 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.744032 4992 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.744053 4992 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.744069 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjbfd\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-kube-api-access-fjbfd\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.744083 4992 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:09 crc kubenswrapper[4992]: I1211 08:30:09.744095 4992 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.252031 4992 generic.go:334] "Generic (PLEG): container finished" podID="45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" containerID="eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab" exitCode=0
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.252088 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" event={"ID":"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7","Type":"ContainerDied","Data":"eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab"}
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.252158 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7" event={"ID":"45b6c297-6ba3-4e90-b4a9-5f17c15f22f7","Type":"ContainerDied","Data":"9848e08e8d1a913d1002d8151ee71d8bcfd642f4bfe68c2aea386b663bba82c3"}
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.252179 4992 scope.go:117] "RemoveContainer" containerID="eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab"
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.252109 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-j86m7"
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.271772 4992 scope.go:117] "RemoveContainer" containerID="eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab"
Dec 11 08:30:10 crc kubenswrapper[4992]: E1211 08:30:10.272717 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab\": container with ID starting with eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab not found: ID does not exist" containerID="eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab"
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.272756 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab"} err="failed to get container status \"eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab\": rpc error: code = NotFound desc = could not find container \"eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab\": container with ID starting with eb80ffe4858e9d6b461e26ae621581232f3faebea1b88a3aecf5941c4ed5efab not found: ID does not exist"
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.275494 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j86m7"]
Dec 11 08:30:10 crc kubenswrapper[4992]: I1211 08:30:10.278676 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-j86m7"]
Dec 11 08:30:12 crc kubenswrapper[4992]: I1211 08:30:12.105194 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" path="/var/lib/kubelet/pods/45b6c297-6ba3-4e90-b4a9-5f17c15f22f7/volumes"
Dec 11 08:32:05 crc kubenswrapper[4992]: I1211 08:32:05.378694 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:32:05 crc kubenswrapper[4992]: I1211 08:32:05.379542 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:32:35 crc kubenswrapper[4992]: I1211 08:32:35.382041 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:32:35 crc kubenswrapper[4992]: I1211 08:32:35.383295 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:33:05 crc kubenswrapper[4992]: I1211 08:33:05.379426 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:33:05 crc kubenswrapper[4992]: I1211 08:33:05.380355 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:33:05 crc kubenswrapper[4992]: I1211 08:33:05.380434 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c"
Dec 11 08:33:05 crc kubenswrapper[4992]: I1211 08:33:05.381547 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7dc8fd690f6db0535b70af4d93802ec848135dcfde017e2a96b74005cc0d3f8"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 08:33:05 crc kubenswrapper[4992]: I1211 08:33:05.381735 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://c7dc8fd690f6db0535b70af4d93802ec848135dcfde017e2a96b74005cc0d3f8" gracePeriod=600
Dec 11 08:33:06 crc kubenswrapper[4992]: I1211 08:33:06.410517 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="c7dc8fd690f6db0535b70af4d93802ec848135dcfde017e2a96b74005cc0d3f8" exitCode=0
Dec 11 08:33:06 crc kubenswrapper[4992]: I1211 08:33:06.410694 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"c7dc8fd690f6db0535b70af4d93802ec848135dcfde017e2a96b74005cc0d3f8"}
Dec 11 08:33:06 crc kubenswrapper[4992]: I1211 08:33:06.411166 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7"}
Dec 11 08:33:06 crc kubenswrapper[4992]: I1211 08:33:06.411184 4992 scope.go:117] "RemoveContainer" containerID="aff7b65415c21f93e15b1dea571d2abe79882b3fb99188a8013f1756d634527d"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.570890 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wz2kx"]
Dec 11 08:33:44 crc kubenswrapper[4992]: E1211 08:33:44.571896 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" containerName="registry"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.571918 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" containerName="registry"
Dec 11 08:33:44 crc kubenswrapper[4992]: E1211 08:33:44.571957 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b54479f-8dc7-42ad-b2c5-993f72a43852" containerName="collect-profiles"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.571971 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b54479f-8dc7-42ad-b2c5-993f72a43852" containerName="collect-profiles"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.572159 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b6c297-6ba3-4e90-b4a9-5f17c15f22f7" containerName="registry"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.572184 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b54479f-8dc7-42ad-b2c5-993f72a43852" containerName="collect-profiles"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.572870 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wz2kx"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.575120 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.577519 4992 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hshrj"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.581387 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xsswz"]
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.583114 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.583172 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.585165 4992 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5rjtf"
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.593563 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xsswz"]
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.598608 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wz2kx"]
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.603804 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f6zvc"]
Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.604607 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.606924 4992 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bv7b7" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.623510 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f6zvc"] Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.720599 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9sh\" (UniqueName: \"kubernetes.io/projected/b8d49b68-c215-4f0e-a508-043a98247366-kube-api-access-jv9sh\") pod \"cert-manager-cainjector-7f985d654d-xsswz\" (UID: \"b8d49b68-c215-4f0e-a508-043a98247366\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.720661 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9p5n\" (UniqueName: \"kubernetes.io/projected/02923a4e-c47d-47f9-8a4c-389310df14cb-kube-api-access-h9p5n\") pod \"cert-manager-5b446d88c5-wz2kx\" (UID: \"02923a4e-c47d-47f9-8a4c-389310df14cb\") " pod="cert-manager/cert-manager-5b446d88c5-wz2kx" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.720695 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpwc\" (UniqueName: \"kubernetes.io/projected/f92a18ba-a108-476e-a4f6-d7f4446b860a-kube-api-access-7jpwc\") pod \"cert-manager-webhook-5655c58dd6-f6zvc\" (UID: \"f92a18ba-a108-476e-a4f6-d7f4446b860a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.822176 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv9sh\" (UniqueName: 
\"kubernetes.io/projected/b8d49b68-c215-4f0e-a508-043a98247366-kube-api-access-jv9sh\") pod \"cert-manager-cainjector-7f985d654d-xsswz\" (UID: \"b8d49b68-c215-4f0e-a508-043a98247366\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.822290 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9p5n\" (UniqueName: \"kubernetes.io/projected/02923a4e-c47d-47f9-8a4c-389310df14cb-kube-api-access-h9p5n\") pod \"cert-manager-5b446d88c5-wz2kx\" (UID: \"02923a4e-c47d-47f9-8a4c-389310df14cb\") " pod="cert-manager/cert-manager-5b446d88c5-wz2kx" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.822349 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jpwc\" (UniqueName: \"kubernetes.io/projected/f92a18ba-a108-476e-a4f6-d7f4446b860a-kube-api-access-7jpwc\") pod \"cert-manager-webhook-5655c58dd6-f6zvc\" (UID: \"f92a18ba-a108-476e-a4f6-d7f4446b860a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.848415 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv9sh\" (UniqueName: \"kubernetes.io/projected/b8d49b68-c215-4f0e-a508-043a98247366-kube-api-access-jv9sh\") pod \"cert-manager-cainjector-7f985d654d-xsswz\" (UID: \"b8d49b68-c215-4f0e-a508-043a98247366\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.848880 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jpwc\" (UniqueName: \"kubernetes.io/projected/f92a18ba-a108-476e-a4f6-d7f4446b860a-kube-api-access-7jpwc\") pod \"cert-manager-webhook-5655c58dd6-f6zvc\" (UID: \"f92a18ba-a108-476e-a4f6-d7f4446b860a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.849144 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9p5n\" (UniqueName: \"kubernetes.io/projected/02923a4e-c47d-47f9-8a4c-389310df14cb-kube-api-access-h9p5n\") pod \"cert-manager-5b446d88c5-wz2kx\" (UID: \"02923a4e-c47d-47f9-8a4c-389310df14cb\") " pod="cert-manager/cert-manager-5b446d88c5-wz2kx" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.892860 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-wz2kx" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.899413 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" Dec 11 08:33:44 crc kubenswrapper[4992]: I1211 08:33:44.926104 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:45 crc kubenswrapper[4992]: I1211 08:33:45.117541 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-wz2kx"] Dec 11 08:33:45 crc kubenswrapper[4992]: I1211 08:33:45.136070 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 08:33:45 crc kubenswrapper[4992]: I1211 08:33:45.164273 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f6zvc"] Dec 11 08:33:45 crc kubenswrapper[4992]: W1211 08:33:45.169815 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf92a18ba_a108_476e_a4f6_d7f4446b860a.slice/crio-bbaf63263d26148a0913a5189f7c47a9e36f2296aa252f73c125c93cae230c1f WatchSource:0}: Error finding container bbaf63263d26148a0913a5189f7c47a9e36f2296aa252f73c125c93cae230c1f: Status 404 returned error can't find the container with id bbaf63263d26148a0913a5189f7c47a9e36f2296aa252f73c125c93cae230c1f Dec 11 08:33:45 crc 
kubenswrapper[4992]: I1211 08:33:45.188771 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-xsswz"] Dec 11 08:33:45 crc kubenswrapper[4992]: W1211 08:33:45.197369 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8d49b68_c215_4f0e_a508_043a98247366.slice/crio-6e7a6ea65e659c9aee915d36dd2ca2ff6f82a5f0d0fbfcce5e0021a182e14740 WatchSource:0}: Error finding container 6e7a6ea65e659c9aee915d36dd2ca2ff6f82a5f0d0fbfcce5e0021a182e14740: Status 404 returned error can't find the container with id 6e7a6ea65e659c9aee915d36dd2ca2ff6f82a5f0d0fbfcce5e0021a182e14740 Dec 11 08:33:45 crc kubenswrapper[4992]: I1211 08:33:45.780646 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" event={"ID":"f92a18ba-a108-476e-a4f6-d7f4446b860a","Type":"ContainerStarted","Data":"bbaf63263d26148a0913a5189f7c47a9e36f2296aa252f73c125c93cae230c1f"} Dec 11 08:33:45 crc kubenswrapper[4992]: I1211 08:33:45.781733 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" event={"ID":"b8d49b68-c215-4f0e-a508-043a98247366","Type":"ContainerStarted","Data":"6e7a6ea65e659c9aee915d36dd2ca2ff6f82a5f0d0fbfcce5e0021a182e14740"} Dec 11 08:33:45 crc kubenswrapper[4992]: I1211 08:33:45.782669 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wz2kx" event={"ID":"02923a4e-c47d-47f9-8a4c-389310df14cb","Type":"ContainerStarted","Data":"a55cb42b8228c09471d809eab105ca36cbee80acb621c6ae7f30094fd33868b1"} Dec 11 08:33:48 crc kubenswrapper[4992]: I1211 08:33:48.800612 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" 
event={"ID":"b8d49b68-c215-4f0e-a508-043a98247366","Type":"ContainerStarted","Data":"3a02860267c203957459fe807dd26ce5040b0804cc412c6c129878f7adc17da9"} Dec 11 08:33:48 crc kubenswrapper[4992]: I1211 08:33:48.802617 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-wz2kx" event={"ID":"02923a4e-c47d-47f9-8a4c-389310df14cb","Type":"ContainerStarted","Data":"6d0a3e1f7df5c919a28e29cb799891e0cc7958cf55cbdaa438210de8fb9012ba"} Dec 11 08:33:48 crc kubenswrapper[4992]: I1211 08:33:48.804006 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" event={"ID":"f92a18ba-a108-476e-a4f6-d7f4446b860a","Type":"ContainerStarted","Data":"cc88e068672ca5bb75897d13ff65c135fe2e1cdab936f611acb4b900e5ee7aae"} Dec 11 08:33:48 crc kubenswrapper[4992]: I1211 08:33:48.804232 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:48 crc kubenswrapper[4992]: I1211 08:33:48.820083 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-xsswz" podStartSLOduration=1.808880569 podStartE2EDuration="4.820064463s" podCreationTimestamp="2025-12-11 08:33:44 +0000 UTC" firstStartedPulling="2025-12-11 08:33:45.200212898 +0000 UTC m=+649.459686824" lastFinishedPulling="2025-12-11 08:33:48.211396792 +0000 UTC m=+652.470870718" observedRunningTime="2025-12-11 08:33:48.816574886 +0000 UTC m=+653.076048822" watchObservedRunningTime="2025-12-11 08:33:48.820064463 +0000 UTC m=+653.079538389" Dec 11 08:33:48 crc kubenswrapper[4992]: I1211 08:33:48.838464 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-wz2kx" podStartSLOduration=1.743424133 podStartE2EDuration="4.838442885s" podCreationTimestamp="2025-12-11 08:33:44 +0000 UTC" firstStartedPulling="2025-12-11 08:33:45.134553717 +0000 UTC 
m=+649.394027643" lastFinishedPulling="2025-12-11 08:33:48.229572459 +0000 UTC m=+652.489046395" observedRunningTime="2025-12-11 08:33:48.836268491 +0000 UTC m=+653.095742417" watchObservedRunningTime="2025-12-11 08:33:48.838442885 +0000 UTC m=+653.097916811" Dec 11 08:33:54 crc kubenswrapper[4992]: I1211 08:33:54.930968 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" Dec 11 08:33:54 crc kubenswrapper[4992]: I1211 08:33:54.955148 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-f6zvc" podStartSLOduration=7.925067346 podStartE2EDuration="10.955114193s" podCreationTimestamp="2025-12-11 08:33:44 +0000 UTC" firstStartedPulling="2025-12-11 08:33:45.174020535 +0000 UTC m=+649.433494461" lastFinishedPulling="2025-12-11 08:33:48.204067382 +0000 UTC m=+652.463541308" observedRunningTime="2025-12-11 08:33:48.865215934 +0000 UTC m=+653.124689880" watchObservedRunningTime="2025-12-11 08:33:54.955114193 +0000 UTC m=+659.214588159" Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.752877 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fbd2b"] Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.753854 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-controller" containerID="cri-o://f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" gracePeriod=30 Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.754301 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="sbdb" containerID="cri-o://45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" gracePeriod=30 Dec 11 08:34:14 
crc kubenswrapper[4992]: I1211 08:34:14.754349 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="nbdb" containerID="cri-o://06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" gracePeriod=30 Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.754393 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="northd" containerID="cri-o://b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" gracePeriod=30 Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.754428 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" gracePeriod=30 Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.754481 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-node" containerID="cri-o://4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" gracePeriod=30 Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.754520 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-acl-logging" containerID="cri-o://5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" gracePeriod=30 Dec 11 08:34:14 crc kubenswrapper[4992]: I1211 08:34:14.792384 4992 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" containerID="cri-o://88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" gracePeriod=30 Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.418392 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f is running failed: container process not found" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.418409 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac is running failed: container process not found" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.418887 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac is running failed: container process not found" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.418878 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f is running failed: container process not found" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.419184 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac is running failed: container process not found" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.419230 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="sbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.419335 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f is running failed: container process not found" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.419376 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="nbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.706306 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/3.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.709728 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovn-acl-logging/0.log" Dec 11 08:34:15 crc kubenswrapper[4992]: 
I1211 08:34:15.710274 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovn-controller/0.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.710872 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.773658 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-v9hkf"] Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.773906 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.773921 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.773933 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.773944 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.773957 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="nbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.773965 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="nbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.773975 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-node" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 
08:34:15.773983 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-node" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774003 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kubecfg-setup" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774011 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kubecfg-setup" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774026 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774034 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774045 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="northd" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774053 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="northd" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774066 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774075 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774086 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 
08:34:15.774094 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774103 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-acl-logging" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774111 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-acl-logging" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774128 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="sbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774136 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="sbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774251 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-acl-logging" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774269 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774278 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774289 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="nbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774301 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774312 
4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774321 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-node" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774333 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovn-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774343 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774356 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="northd" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774364 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="sbdb" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774492 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774503 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.774521 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.774529 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc 
kubenswrapper[4992]: I1211 08:34:15.774666 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerName="ovnkube-controller" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.776722 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840261 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-netd\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840321 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovn-node-metrics-cert\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840354 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-var-lib-cni-networks-ovn-kubernetes\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840381 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-log-socket\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840379 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840413 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-config\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840443 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-netns\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840459 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-ovn-kubernetes\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840478 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-ovn\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840492 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-etc-openvswitch\") pod 
\"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840497 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840525 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-env-overrides\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840537 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840564 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840529 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-log-socket" (OuterVolumeSpecName: "log-socket") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840555 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-node-log\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840586 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-node-log" (OuterVolumeSpecName: "node-log") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840624 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840587 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840707 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-systemd\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840734 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-systemd-units\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840785 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-kubelet\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840812 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840821 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-bin\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840843 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840870 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840886 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-script-lib\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840911 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-slash\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840937 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-var-lib-openvswitch\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840958 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-openvswitch\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840967 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840983 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k9hp\" (UniqueName: \"kubernetes.io/projected/216d94db-3002-48a3-b3c2-2a3201f4d6cd-kube-api-access-6k9hp\") pod \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\" (UID: \"216d94db-3002-48a3-b3c2-2a3201f4d6cd\") " Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.840981 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841003 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841005 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-slash" (OuterVolumeSpecName: "host-slash") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841025 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841162 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6dw\" (UniqueName: \"kubernetes.io/projected/9f5c6ba9-e921-403a-a170-ebb9dd545261-kube-api-access-4w6dw\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841244 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-log-socket\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841274 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-etc-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841300 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-cni-bin\") pod 
\"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841396 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841325 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-systemd\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-run-netns\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841512 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-kubelet\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841577 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841709 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-ovn\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841733 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovnkube-script-lib\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841752 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovn-node-metrics-cert\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841841 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-node-log\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841875 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-var-lib-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841905 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovnkube-config\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841935 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-systemd-units\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841963 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-env-overrides\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.841985 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842020 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-cni-netd\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842111 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-slash\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842140 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842253 4992 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842285 4992 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842302 4992 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-node-log\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 
08:34:15.842316 4992 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842326 4992 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842334 4992 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842344 4992 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842358 4992 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-slash\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842369 4992 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842379 4992 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842388 4992 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842397 4992 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842407 4992 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-log-socket\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842418 4992 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842430 4992 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842439 4992 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.842448 4992 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.846088 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovn-node-metrics-cert" 
(OuterVolumeSpecName: "ovn-node-metrics-cert") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.846286 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216d94db-3002-48a3-b3c2-2a3201f4d6cd-kube-api-access-6k9hp" (OuterVolumeSpecName: "kube-api-access-6k9hp") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "kube-api-access-6k9hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.855362 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "216d94db-3002-48a3-b3c2-2a3201f4d6cd" (UID: "216d94db-3002-48a3-b3c2-2a3201f4d6cd"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943596 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovnkube-config\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943676 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-systemd-units\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943704 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-env-overrides\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943770 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943799 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-cni-netd\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 
08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943851 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943874 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-slash\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943899 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6dw\" (UniqueName: \"kubernetes.io/projected/9f5c6ba9-e921-403a-a170-ebb9dd545261-kube-api-access-4w6dw\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943963 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-log-socket\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.943983 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-etc-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc 
kubenswrapper[4992]: I1211 08:34:15.944031 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-cni-bin\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944051 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-systemd\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944100 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-run-netns\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944126 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-kubelet\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944180 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944232 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-ovn\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944279 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovnkube-script-lib\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944302 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovn-node-metrics-cert\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944366 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-node-log\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944393 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-var-lib-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944475 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k9hp\" (UniqueName: 
\"kubernetes.io/projected/216d94db-3002-48a3-b3c2-2a3201f4d6cd-kube-api-access-6k9hp\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944519 4992 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/216d94db-3002-48a3-b3c2-2a3201f4d6cd-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944535 4992 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/216d94db-3002-48a3-b3c2-2a3201f4d6cd-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944605 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-var-lib-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944701 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-systemd-units\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.944953 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovnkube-config\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945034 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-cni-bin\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945070 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-run-ovn-kubernetes\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945104 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-cni-netd\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945136 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945169 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-slash\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945484 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-log-socket\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945528 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-etc-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945557 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-openvswitch\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945587 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-systemd\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945613 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-run-netns\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.945665 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-host-kubelet\") pod \"ovnkube-node-v9hkf\" (UID: 
\"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.946222 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovnkube-script-lib\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.946276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-run-ovn\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.946755 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f5c6ba9-e921-403a-a170-ebb9dd545261-node-log\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.947150 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f5c6ba9-e921-403a-a170-ebb9dd545261-env-overrides\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.956042 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f5c6ba9-e921-403a-a170-ebb9dd545261-ovn-node-metrics-cert\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc 
kubenswrapper[4992]: I1211 08:34:15.966500 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6dw\" (UniqueName: \"kubernetes.io/projected/9f5c6ba9-e921-403a-a170-ebb9dd545261-kube-api-access-4w6dw\") pod \"ovnkube-node-v9hkf\" (UID: \"9f5c6ba9-e921-403a-a170-ebb9dd545261\") " pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.976492 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovnkube-controller/3.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.978969 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovn-acl-logging/0.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979433 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fbd2b_216d94db-3002-48a3-b3c2-2a3201f4d6cd/ovn-controller/0.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979806 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" exitCode=0 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979839 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" exitCode=0 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979849 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" exitCode=0 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979862 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" 
containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" exitCode=0 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979875 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" exitCode=0 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979888 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" exitCode=0 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979901 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" exitCode=143 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979915 4992 generic.go:334] "Generic (PLEG): container finished" podID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" exitCode=143 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979963 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.979999 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980017 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" 
event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980033 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980049 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980065 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980081 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980096 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980105 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980114 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980123 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980132 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980141 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980149 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980159 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980172 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980185 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980195 4992 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980204 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980213 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980222 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980230 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980239 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980247 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980256 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980265 4992 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980277 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980290 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980300 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980309 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980318 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980327 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980335 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} Dec 11 
08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980344 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980354 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980362 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980371 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980383 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" event={"ID":"216d94db-3002-48a3-b3c2-2a3201f4d6cd","Type":"ContainerDied","Data":"5c0e5e5760bb8408585572087aa2a9fc777c22071fca5224966c934afe3720fc"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980398 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980408 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980417 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980426 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980435 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980444 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980453 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980461 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980469 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980479 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.980497 4992 scope.go:117] "RemoveContainer" 
containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.986943 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fbd2b" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.995050 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/2.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.995819 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/1.log" Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.995857 4992 generic.go:334] "Generic (PLEG): container finished" podID="5838adfc-502f-44ac-be33-14f964497c4f" containerID="88cd5d23fc1cf16747d24d55152252142e221cc14a1a4fb4bb157a484b76bd2c" exitCode=2 Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.995886 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerDied","Data":"88cd5d23fc1cf16747d24d55152252142e221cc14a1a4fb4bb157a484b76bd2c"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.995907 4992 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329"} Dec 11 08:34:15 crc kubenswrapper[4992]: I1211 08:34:15.997088 4992 scope.go:117] "RemoveContainer" containerID="88cd5d23fc1cf16747d24d55152252142e221cc14a1a4fb4bb157a484b76bd2c" Dec 11 08:34:15 crc kubenswrapper[4992]: E1211 08:34:15.997692 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-lglcz_openshift-multus(5838adfc-502f-44ac-be33-14f964497c4f)\"" pod="openshift-multus/multus-lglcz" podUID="5838adfc-502f-44ac-be33-14f964497c4f" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.014559 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.034237 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fbd2b"] Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.037245 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fbd2b"] Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.050163 4992 scope.go:117] "RemoveContainer" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.064812 4992 scope.go:117] "RemoveContainer" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.077919 4992 scope.go:117] "RemoveContainer" containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.092266 4992 scope.go:117] "RemoveContainer" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.099545 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.104086 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216d94db-3002-48a3-b3c2-2a3201f4d6cd" path="/var/lib/kubelet/pods/216d94db-3002-48a3-b3c2-2a3201f4d6cd/volumes" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.109754 4992 scope.go:117] "RemoveContainer" containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.126543 4992 scope.go:117] "RemoveContainer" containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.149523 4992 scope.go:117] "RemoveContainer" containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.169105 4992 scope.go:117] "RemoveContainer" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.185803 4992 scope.go:117] "RemoveContainer" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.186337 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": container with ID starting with 88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580 not found: ID does not exist" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.186406 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} err="failed to get container status 
\"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": rpc error: code = NotFound desc = could not find container \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": container with ID starting with 88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.186455 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.186936 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": container with ID starting with 5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404 not found: ID does not exist" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.186978 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} err="failed to get container status \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": rpc error: code = NotFound desc = could not find container \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": container with ID starting with 5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.187007 4992 scope.go:117] "RemoveContainer" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.187327 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": container with ID starting with 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac not found: ID does not exist" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.187388 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} err="failed to get container status \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": rpc error: code = NotFound desc = could not find container \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": container with ID starting with 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.187428 4992 scope.go:117] "RemoveContainer" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.187996 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": container with ID starting with 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f not found: ID does not exist" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.188021 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} err="failed to get container status \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": rpc error: code = NotFound desc = could not find container \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": container with ID 
starting with 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.188036 4992 scope.go:117] "RemoveContainer" containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.188356 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": container with ID starting with b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420 not found: ID does not exist" containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.188395 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} err="failed to get container status \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": rpc error: code = NotFound desc = could not find container \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": container with ID starting with b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.188417 4992 scope.go:117] "RemoveContainer" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.188710 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": container with ID starting with f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c not found: ID does not exist" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" Dec 11 
08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.188777 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} err="failed to get container status \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": rpc error: code = NotFound desc = could not find container \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": container with ID starting with f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.188804 4992 scope.go:117] "RemoveContainer" containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.189225 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": container with ID starting with 4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4 not found: ID does not exist" containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.189251 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} err="failed to get container status \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": rpc error: code = NotFound desc = could not find container \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": container with ID starting with 4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.189265 4992 scope.go:117] "RemoveContainer" 
containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.189799 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": container with ID starting with 5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e not found: ID does not exist" containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.189825 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} err="failed to get container status \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": rpc error: code = NotFound desc = could not find container \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": container with ID starting with 5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.189844 4992 scope.go:117] "RemoveContainer" containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.190295 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": container with ID starting with f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd not found: ID does not exist" containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.190319 4992 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} err="failed to get container status \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": rpc error: code = NotFound desc = could not find container \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": container with ID starting with f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.190336 4992 scope.go:117] "RemoveContainer" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" Dec 11 08:34:16 crc kubenswrapper[4992]: E1211 08:34:16.190597 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": container with ID starting with 7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a not found: ID does not exist" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.190806 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} err="failed to get container status \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": rpc error: code = NotFound desc = could not find container \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": container with ID starting with 7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.190833 4992 scope.go:117] "RemoveContainer" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.191141 4992 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} err="failed to get container status \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": rpc error: code = NotFound desc = could not find container \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": container with ID starting with 88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.191183 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.191492 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} err="failed to get container status \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": rpc error: code = NotFound desc = could not find container \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": container with ID starting with 5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.191518 4992 scope.go:117] "RemoveContainer" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.191773 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} err="failed to get container status \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": rpc error: code = NotFound desc = could not find container \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": container with ID starting with 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac not 
found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.191801 4992 scope.go:117] "RemoveContainer" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192026 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} err="failed to get container status \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": rpc error: code = NotFound desc = could not find container \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": container with ID starting with 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192055 4992 scope.go:117] "RemoveContainer" containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192313 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} err="failed to get container status \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": rpc error: code = NotFound desc = could not find container \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": container with ID starting with b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192367 4992 scope.go:117] "RemoveContainer" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192680 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} err="failed to get 
container status \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": rpc error: code = NotFound desc = could not find container \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": container with ID starting with f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192709 4992 scope.go:117] "RemoveContainer" containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.192998 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} err="failed to get container status \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": rpc error: code = NotFound desc = could not find container \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": container with ID starting with 4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193028 4992 scope.go:117] "RemoveContainer" containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193309 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} err="failed to get container status \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": rpc error: code = NotFound desc = could not find container \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": container with ID starting with 5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193345 4992 scope.go:117] "RemoveContainer" 
containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193620 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} err="failed to get container status \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": rpc error: code = NotFound desc = could not find container \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": container with ID starting with f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193665 4992 scope.go:117] "RemoveContainer" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193940 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} err="failed to get container status \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": rpc error: code = NotFound desc = could not find container \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": container with ID starting with 7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.193965 4992 scope.go:117] "RemoveContainer" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.194247 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} err="failed to get container status \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": rpc error: code = NotFound desc = could 
not find container \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": container with ID starting with 88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.194276 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.194596 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} err="failed to get container status \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": rpc error: code = NotFound desc = could not find container \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": container with ID starting with 5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.194649 4992 scope.go:117] "RemoveContainer" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.195058 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} err="failed to get container status \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": rpc error: code = NotFound desc = could not find container \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": container with ID starting with 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.195082 4992 scope.go:117] "RemoveContainer" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 
08:34:16.195375 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} err="failed to get container status \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": rpc error: code = NotFound desc = could not find container \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": container with ID starting with 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.195403 4992 scope.go:117] "RemoveContainer" containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.195705 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} err="failed to get container status \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": rpc error: code = NotFound desc = could not find container \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": container with ID starting with b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.195761 4992 scope.go:117] "RemoveContainer" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196028 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} err="failed to get container status \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": rpc error: code = NotFound desc = could not find container \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": container with ID starting with 
f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196052 4992 scope.go:117] "RemoveContainer" containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196281 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} err="failed to get container status \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": rpc error: code = NotFound desc = could not find container \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": container with ID starting with 4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196305 4992 scope.go:117] "RemoveContainer" containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196594 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} err="failed to get container status \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": rpc error: code = NotFound desc = could not find container \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": container with ID starting with 5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196618 4992 scope.go:117] "RemoveContainer" containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196933 4992 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} err="failed to get container status \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": rpc error: code = NotFound desc = could not find container \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": container with ID starting with f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.196962 4992 scope.go:117] "RemoveContainer" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.197208 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} err="failed to get container status \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": rpc error: code = NotFound desc = could not find container \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": container with ID starting with 7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.197243 4992 scope.go:117] "RemoveContainer" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.197542 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} err="failed to get container status \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": rpc error: code = NotFound desc = could not find container \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": container with ID starting with 88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580 not found: ID does not 
exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.197572 4992 scope.go:117] "RemoveContainer" containerID="5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.197866 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404"} err="failed to get container status \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": rpc error: code = NotFound desc = could not find container \"5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404\": container with ID starting with 5a23791eba8520fa8dd2321080132f9ecdc9d15f08b125e26acfc3c38199c404 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.197889 4992 scope.go:117] "RemoveContainer" containerID="45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198134 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac"} err="failed to get container status \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": rpc error: code = NotFound desc = could not find container \"45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac\": container with ID starting with 45a46db9b5cb1bd0a0a9e7e6e93a6753a65a4e4fcee4f0007f317a9314a721ac not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198153 4992 scope.go:117] "RemoveContainer" containerID="06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198426 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f"} err="failed to get container status 
\"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": rpc error: code = NotFound desc = could not find container \"06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f\": container with ID starting with 06d55854908bd20f09ba984239c3ff09ff3908ef1b3477bf4b78b65ca1ba084f not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198448 4992 scope.go:117] "RemoveContainer" containerID="b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198733 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420"} err="failed to get container status \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": rpc error: code = NotFound desc = could not find container \"b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420\": container with ID starting with b9482212c7731f25a979ed5875a552e07529649013f4ec7ee358a7d9aae68420 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198753 4992 scope.go:117] "RemoveContainer" containerID="f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.198987 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c"} err="failed to get container status \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": rpc error: code = NotFound desc = could not find container \"f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c\": container with ID starting with f4a1117b8f393e2ab005f576a72eda6961e096a6b8878ca1d26ec65ca226015c not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199004 4992 scope.go:117] "RemoveContainer" 
containerID="4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199250 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4"} err="failed to get container status \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": rpc error: code = NotFound desc = could not find container \"4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4\": container with ID starting with 4040bc22d22d63989e66fd26748feae2b523ae0ea6d586fd64ba7d42eedebcf4 not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199276 4992 scope.go:117] "RemoveContainer" containerID="5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199518 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e"} err="failed to get container status \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": rpc error: code = NotFound desc = could not find container \"5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e\": container with ID starting with 5d495b344e496ba12faa93ad330af01255767f20bf46601cfc5f93c4bcafde3e not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199539 4992 scope.go:117] "RemoveContainer" containerID="f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199817 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd"} err="failed to get container status \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": rpc error: code = NotFound desc = could 
not find container \"f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd\": container with ID starting with f6b72e5166aa746f2df31c4872b73130aad12fafb8c22c19331efac1fa7fffcd not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.199837 4992 scope.go:117] "RemoveContainer" containerID="7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.200107 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a"} err="failed to get container status \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": rpc error: code = NotFound desc = could not find container \"7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a\": container with ID starting with 7f393797ec19048d68cc550028cda0c872e8416a91920d71f2c5b7aef08f5d3a not found: ID does not exist" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.200129 4992 scope.go:117] "RemoveContainer" containerID="88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580" Dec 11 08:34:16 crc kubenswrapper[4992]: I1211 08:34:16.200398 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580"} err="failed to get container status \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": rpc error: code = NotFound desc = could not find container \"88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580\": container with ID starting with 88ce26056e722749fa5bed68f5cdee4ee2c812e1f2ced29bd6dd8fe9ce9a5580 not found: ID does not exist" Dec 11 08:34:17 crc kubenswrapper[4992]: I1211 08:34:17.002798 4992 generic.go:334] "Generic (PLEG): container finished" podID="9f5c6ba9-e921-403a-a170-ebb9dd545261" 
containerID="21c8950995158aa9ae6b028225a13518e1df3cc84b55e66be0739bd7fa003021" exitCode=0 Dec 11 08:34:17 crc kubenswrapper[4992]: I1211 08:34:17.002885 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerDied","Data":"21c8950995158aa9ae6b028225a13518e1df3cc84b55e66be0739bd7fa003021"} Dec 11 08:34:17 crc kubenswrapper[4992]: I1211 08:34:17.003254 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"ad1ae5aa46e470bfe08c7438cf86ebe47e3cbc2dfdf0862dff6a7c057b5e9a8c"} Dec 11 08:34:18 crc kubenswrapper[4992]: I1211 08:34:18.014239 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"717d552c84b4aeb39acb2c2c9b0760634c432ec29d5160db62fa0e5123ecf4f2"} Dec 11 08:34:18 crc kubenswrapper[4992]: I1211 08:34:18.014694 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"78262f8e0c6e362e40cc99c8af3f46dca529fa0878106e81f591e0c416b6a198"} Dec 11 08:34:18 crc kubenswrapper[4992]: I1211 08:34:18.014727 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"36fd35d6c6410c0ae32b9f60b4711bec12a461c8118fdab4852a863a31384e84"} Dec 11 08:34:18 crc kubenswrapper[4992]: I1211 08:34:18.014752 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" 
event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"5f355f72b6ebe9fbfb6f5067b9e5c4fa66bd904ba8d75db08881a9f01489cdaa"} Dec 11 08:34:19 crc kubenswrapper[4992]: I1211 08:34:19.033532 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"f2105aafd7f33c80f348d214528292af4502beaace0f046d2cacc2880fc11937"} Dec 11 08:34:19 crc kubenswrapper[4992]: I1211 08:34:19.033601 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"3953d9618ec24f16e7b6bd9157eb024607080749afed9b5bcc882c57ba386075"} Dec 11 08:34:22 crc kubenswrapper[4992]: I1211 08:34:22.054450 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"0675fe79110b4c77667b495dd7cd390d4a64409b7eecc5734d59533b6cc171f3"} Dec 11 08:34:25 crc kubenswrapper[4992]: I1211 08:34:25.079819 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" event={"ID":"9f5c6ba9-e921-403a-a170-ebb9dd545261","Type":"ContainerStarted","Data":"5d1e073b5f5c79bc12e46a8caf806192c7c69ed08dfc715d4a52c026f4e4c5bf"} Dec 11 08:34:25 crc kubenswrapper[4992]: I1211 08:34:25.080565 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:25 crc kubenswrapper[4992]: I1211 08:34:25.080651 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:25 crc kubenswrapper[4992]: I1211 08:34:25.080687 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:25 crc 
kubenswrapper[4992]: I1211 08:34:25.115591 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" podStartSLOduration=10.115573772 podStartE2EDuration="10.115573772s" podCreationTimestamp="2025-12-11 08:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:34:25.114843795 +0000 UTC m=+689.374317731" watchObservedRunningTime="2025-12-11 08:34:25.115573772 +0000 UTC m=+689.375047708" Dec 11 08:34:25 crc kubenswrapper[4992]: I1211 08:34:25.118131 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:25 crc kubenswrapper[4992]: I1211 08:34:25.123200 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:31 crc kubenswrapper[4992]: I1211 08:34:31.095345 4992 scope.go:117] "RemoveContainer" containerID="88cd5d23fc1cf16747d24d55152252142e221cc14a1a4fb4bb157a484b76bd2c" Dec 11 08:34:31 crc kubenswrapper[4992]: E1211 08:34:31.096212 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lglcz_openshift-multus(5838adfc-502f-44ac-be33-14f964497c4f)\"" pod="openshift-multus/multus-lglcz" podUID="5838adfc-502f-44ac-be33-14f964497c4f" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.312972 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr"] Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.314547 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.316943 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.325982 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr"] Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.370914 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.371011 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwkq\" (UniqueName: \"kubernetes.io/projected/12e967da-ba1b-419e-ae17-80b2f60a3300-kube-api-access-kdwkq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.371045 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: 
I1211 08:34:34.473155 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdwkq\" (UniqueName: \"kubernetes.io/projected/12e967da-ba1b-419e-ae17-80b2f60a3300-kube-api-access-kdwkq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.473248 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.473299 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.473791 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.474046 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.509073 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdwkq\" (UniqueName: \"kubernetes.io/projected/12e967da-ba1b-419e-ae17-80b2f60a3300-kube-api-access-kdwkq\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: I1211 08:34:34.636125 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: E1211 08:34:34.673616 4992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(1bab906f7e158e9c0844318fc556e97e47210810e4f55a9aeffd58b3ae2ae28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 08:34:34 crc kubenswrapper[4992]: E1211 08:34:34.673702 4992 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(1bab906f7e158e9c0844318fc556e97e47210810e4f55a9aeffd58b3ae2ae28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: E1211 08:34:34.673732 4992 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(1bab906f7e158e9c0844318fc556e97e47210810e4f55a9aeffd58b3ae2ae28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:34 crc kubenswrapper[4992]: E1211 08:34:34.673808 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace(12e967da-ba1b-419e-ae17-80b2f60a3300)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace(12e967da-ba1b-419e-ae17-80b2f60a3300)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(1bab906f7e158e9c0844318fc556e97e47210810e4f55a9aeffd58b3ae2ae28d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" Dec 11 08:34:35 crc kubenswrapper[4992]: I1211 08:34:35.142442 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:35 crc kubenswrapper[4992]: I1211 08:34:35.143220 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:35 crc kubenswrapper[4992]: E1211 08:34:35.166683 4992 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(3bcc8118c8ec738bed71df811c0f2aa2ea596aad9683bc3a7a9aeab03fbf4fc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 08:34:35 crc kubenswrapper[4992]: E1211 08:34:35.166760 4992 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(3bcc8118c8ec738bed71df811c0f2aa2ea596aad9683bc3a7a9aeab03fbf4fc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:35 crc kubenswrapper[4992]: E1211 08:34:35.166789 4992 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(3bcc8118c8ec738bed71df811c0f2aa2ea596aad9683bc3a7a9aeab03fbf4fc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:35 crc kubenswrapper[4992]: E1211 08:34:35.166858 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace(12e967da-ba1b-419e-ae17-80b2f60a3300)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace(12e967da-ba1b-419e-ae17-80b2f60a3300)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_openshift-marketplace_12e967da-ba1b-419e-ae17-80b2f60a3300_0(3bcc8118c8ec738bed71df811c0f2aa2ea596aad9683bc3a7a9aeab03fbf4fc4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" Dec 11 08:34:44 crc kubenswrapper[4992]: I1211 08:34:44.096017 4992 scope.go:117] "RemoveContainer" containerID="88cd5d23fc1cf16747d24d55152252142e221cc14a1a4fb4bb157a484b76bd2c" Dec 11 08:34:45 crc kubenswrapper[4992]: I1211 08:34:45.213428 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/2.log" Dec 11 08:34:45 crc kubenswrapper[4992]: I1211 08:34:45.214756 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/1.log" Dec 11 08:34:45 crc kubenswrapper[4992]: I1211 08:34:45.214850 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lglcz" event={"ID":"5838adfc-502f-44ac-be33-14f964497c4f","Type":"ContainerStarted","Data":"23cd711c2cdca77befd41b9bfce820ff54684151b73482d21d7c2cdddd9e4fdf"} Dec 11 
08:34:46 crc kubenswrapper[4992]: I1211 08:34:46.131351 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-v9hkf" Dec 11 08:34:50 crc kubenswrapper[4992]: I1211 08:34:50.094833 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:50 crc kubenswrapper[4992]: I1211 08:34:50.095860 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:50 crc kubenswrapper[4992]: I1211 08:34:50.312582 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr"] Dec 11 08:34:50 crc kubenswrapper[4992]: W1211 08:34:50.319359 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12e967da_ba1b_419e_ae17_80b2f60a3300.slice/crio-7a88c816b954bcb5294d630a8b1c6bb913860258cbb4302052dd9be2edceb1c9 WatchSource:0}: Error finding container 7a88c816b954bcb5294d630a8b1c6bb913860258cbb4302052dd9be2edceb1c9: Status 404 returned error can't find the container with id 7a88c816b954bcb5294d630a8b1c6bb913860258cbb4302052dd9be2edceb1c9 Dec 11 08:34:51 crc kubenswrapper[4992]: I1211 08:34:51.273920 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" event={"ID":"12e967da-ba1b-419e-ae17-80b2f60a3300","Type":"ContainerStarted","Data":"6ed0a6f10f1b356c8404009a81992c6de4081e16c4d0bec2da0955983da3ddab"} Dec 11 08:34:51 crc kubenswrapper[4992]: I1211 08:34:51.273981 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" 
event={"ID":"12e967da-ba1b-419e-ae17-80b2f60a3300","Type":"ContainerStarted","Data":"7a88c816b954bcb5294d630a8b1c6bb913860258cbb4302052dd9be2edceb1c9"} Dec 11 08:34:52 crc kubenswrapper[4992]: I1211 08:34:52.281507 4992 generic.go:334] "Generic (PLEG): container finished" podID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerID="6ed0a6f10f1b356c8404009a81992c6de4081e16c4d0bec2da0955983da3ddab" exitCode=0 Dec 11 08:34:52 crc kubenswrapper[4992]: I1211 08:34:52.281546 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" event={"ID":"12e967da-ba1b-419e-ae17-80b2f60a3300","Type":"ContainerDied","Data":"6ed0a6f10f1b356c8404009a81992c6de4081e16c4d0bec2da0955983da3ddab"} Dec 11 08:34:54 crc kubenswrapper[4992]: I1211 08:34:54.300154 4992 generic.go:334] "Generic (PLEG): container finished" podID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerID="df0c317a3eccb0e8ce68b9387e0e9f70c026e34bac07e42616976a8050e5c9fa" exitCode=0 Dec 11 08:34:54 crc kubenswrapper[4992]: I1211 08:34:54.300424 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" event={"ID":"12e967da-ba1b-419e-ae17-80b2f60a3300","Type":"ContainerDied","Data":"df0c317a3eccb0e8ce68b9387e0e9f70c026e34bac07e42616976a8050e5c9fa"} Dec 11 08:34:55 crc kubenswrapper[4992]: I1211 08:34:55.320902 4992 generic.go:334] "Generic (PLEG): container finished" podID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerID="30b36de45124a80d140df934d805dd089579ec7d80cb6043c5200caa514b9f09" exitCode=0 Dec 11 08:34:55 crc kubenswrapper[4992]: I1211 08:34:55.320996 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" event={"ID":"12e967da-ba1b-419e-ae17-80b2f60a3300","Type":"ContainerDied","Data":"30b36de45124a80d140df934d805dd089579ec7d80cb6043c5200caa514b9f09"} 
Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.365796 4992 scope.go:117] "RemoveContainer" containerID="04c3aedcf37f901f6ceb29917167a1cf94dc8f8aa9b3cb959c13d9ee2180f329" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.574419 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.721511 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-bundle\") pod \"12e967da-ba1b-419e-ae17-80b2f60a3300\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.721580 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-util\") pod \"12e967da-ba1b-419e-ae17-80b2f60a3300\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.721729 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdwkq\" (UniqueName: \"kubernetes.io/projected/12e967da-ba1b-419e-ae17-80b2f60a3300-kube-api-access-kdwkq\") pod \"12e967da-ba1b-419e-ae17-80b2f60a3300\" (UID: \"12e967da-ba1b-419e-ae17-80b2f60a3300\") " Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.723270 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-bundle" (OuterVolumeSpecName: "bundle") pod "12e967da-ba1b-419e-ae17-80b2f60a3300" (UID: "12e967da-ba1b-419e-ae17-80b2f60a3300"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.729470 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e967da-ba1b-419e-ae17-80b2f60a3300-kube-api-access-kdwkq" (OuterVolumeSpecName: "kube-api-access-kdwkq") pod "12e967da-ba1b-419e-ae17-80b2f60a3300" (UID: "12e967da-ba1b-419e-ae17-80b2f60a3300"). InnerVolumeSpecName "kube-api-access-kdwkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.735797 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-util" (OuterVolumeSpecName: "util") pod "12e967da-ba1b-419e-ae17-80b2f60a3300" (UID: "12e967da-ba1b-419e-ae17-80b2f60a3300"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.823478 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdwkq\" (UniqueName: \"kubernetes.io/projected/12e967da-ba1b-419e-ae17-80b2f60a3300-kube-api-access-kdwkq\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.823517 4992 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:56 crc kubenswrapper[4992]: I1211 08:34:56.823526 4992 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/12e967da-ba1b-419e-ae17-80b2f60a3300-util\") on node \"crc\" DevicePath \"\"" Dec 11 08:34:57 crc kubenswrapper[4992]: I1211 08:34:57.334585 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lglcz_5838adfc-502f-44ac-be33-14f964497c4f/kube-multus/2.log" Dec 11 08:34:57 crc kubenswrapper[4992]: I1211 08:34:57.337205 4992 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" event={"ID":"12e967da-ba1b-419e-ae17-80b2f60a3300","Type":"ContainerDied","Data":"7a88c816b954bcb5294d630a8b1c6bb913860258cbb4302052dd9be2edceb1c9"} Dec 11 08:34:57 crc kubenswrapper[4992]: I1211 08:34:57.337276 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr" Dec 11 08:34:57 crc kubenswrapper[4992]: I1211 08:34:57.337283 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a88c816b954bcb5294d630a8b1c6bb913860258cbb4302052dd9be2edceb1c9" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.784684 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-mxq6q"] Dec 11 08:35:00 crc kubenswrapper[4992]: E1211 08:35:00.785328 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="extract" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.785344 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="extract" Dec 11 08:35:00 crc kubenswrapper[4992]: E1211 08:35:00.785354 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="pull" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.785362 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="pull" Dec 11 08:35:00 crc kubenswrapper[4992]: E1211 08:35:00.785375 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="util" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.785382 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="util" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.785496 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e967da-ba1b-419e-ae17-80b2f60a3300" containerName="extract" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.786010 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.788674 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.788675 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4k5ch" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.789016 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.793383 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-mxq6q"] Dec 11 08:35:00 crc kubenswrapper[4992]: I1211 08:35:00.972153 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frpc4\" (UniqueName: \"kubernetes.io/projected/2dec2cea-664e-4421-8e41-8c15c02aa08f-kube-api-access-frpc4\") pod \"nmstate-operator-6769fb99d-mxq6q\" (UID: \"2dec2cea-664e-4421-8e41-8c15c02aa08f\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" Dec 11 08:35:01 crc kubenswrapper[4992]: I1211 08:35:01.073734 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frpc4\" (UniqueName: \"kubernetes.io/projected/2dec2cea-664e-4421-8e41-8c15c02aa08f-kube-api-access-frpc4\") pod \"nmstate-operator-6769fb99d-mxq6q\" (UID: \"2dec2cea-664e-4421-8e41-8c15c02aa08f\") " 
pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" Dec 11 08:35:01 crc kubenswrapper[4992]: I1211 08:35:01.094569 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frpc4\" (UniqueName: \"kubernetes.io/projected/2dec2cea-664e-4421-8e41-8c15c02aa08f-kube-api-access-frpc4\") pod \"nmstate-operator-6769fb99d-mxq6q\" (UID: \"2dec2cea-664e-4421-8e41-8c15c02aa08f\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" Dec 11 08:35:01 crc kubenswrapper[4992]: I1211 08:35:01.103121 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" Dec 11 08:35:01 crc kubenswrapper[4992]: I1211 08:35:01.486330 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-mxq6q"] Dec 11 08:35:02 crc kubenswrapper[4992]: I1211 08:35:02.366392 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" event={"ID":"2dec2cea-664e-4421-8e41-8c15c02aa08f","Type":"ContainerStarted","Data":"736c21668508ed7bd5b464fc6305a58d9a305ed56def3ad6d73586b075511d40"} Dec 11 08:35:04 crc kubenswrapper[4992]: I1211 08:35:04.386234 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" event={"ID":"2dec2cea-664e-4421-8e41-8c15c02aa08f","Type":"ContainerStarted","Data":"48bfcc54e531cdeaa17ddd59e6a5d0e3afb7f023d21b3f76234ba19e2f3b1d8c"} Dec 11 08:35:04 crc kubenswrapper[4992]: I1211 08:35:04.412259 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-mxq6q" podStartSLOduration=2.470527563 podStartE2EDuration="4.412240617s" podCreationTimestamp="2025-12-11 08:35:00 +0000 UTC" firstStartedPulling="2025-12-11 08:35:01.498404399 +0000 UTC m=+725.757878325" lastFinishedPulling="2025-12-11 08:35:03.440117413 +0000 UTC m=+727.699591379" observedRunningTime="2025-12-11 
08:35:04.410163195 +0000 UTC m=+728.669637131" watchObservedRunningTime="2025-12-11 08:35:04.412240617 +0000 UTC m=+728.671714553" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.378813 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.379163 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.412602 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.414352 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.418816 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xf7x6" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.421620 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.422735 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.429758 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba5caba0-f6b6-400d-ab83-b1079de7af46-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.429879 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x62d\" (UniqueName: \"kubernetes.io/projected/ba5caba0-f6b6-400d-ab83-b1079de7af46-kube-api-access-9x62d\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.429914 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2l7\" (UniqueName: \"kubernetes.io/projected/a098ff29-c757-4eac-b38d-33f5d50e1aea-kube-api-access-pm2l7\") pod \"nmstate-metrics-7f7f7578db-6csbm\" (UID: \"a098ff29-c757-4eac-b38d-33f5d50e1aea\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.441507 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.445496 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.451028 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.457867 4992 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-nmstate/nmstate-handler-sh9d8"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.458765 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.531884 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkp6k\" (UniqueName: \"kubernetes.io/projected/865e017c-606b-41e4-82f7-3e0520607d02-kube-api-access-dkp6k\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.531939 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba5caba0-f6b6-400d-ab83-b1079de7af46-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.531982 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-ovs-socket\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.532008 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-nmstate-lock\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.532026 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-dbus-socket\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.532047 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x62d\" (UniqueName: \"kubernetes.io/projected/ba5caba0-f6b6-400d-ab83-b1079de7af46-kube-api-access-9x62d\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.532079 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2l7\" (UniqueName: \"kubernetes.io/projected/a098ff29-c757-4eac-b38d-33f5d50e1aea-kube-api-access-pm2l7\") pod \"nmstate-metrics-7f7f7578db-6csbm\" (UID: \"a098ff29-c757-4eac-b38d-33f5d50e1aea\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" Dec 11 08:35:05 crc kubenswrapper[4992]: E1211 08:35:05.532375 4992 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 11 08:35:05 crc kubenswrapper[4992]: E1211 08:35:05.532459 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba5caba0-f6b6-400d-ab83-b1079de7af46-tls-key-pair podName:ba5caba0-f6b6-400d-ab83-b1079de7af46 nodeName:}" failed. No retries permitted until 2025-12-11 08:35:06.032439883 +0000 UTC m=+730.291913809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ba5caba0-f6b6-400d-ab83-b1079de7af46-tls-key-pair") pod "nmstate-webhook-f8fb84555-t6ptn" (UID: "ba5caba0-f6b6-400d-ab83-b1079de7af46") : secret "openshift-nmstate-webhook" not found Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.551433 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x62d\" (UniqueName: \"kubernetes.io/projected/ba5caba0-f6b6-400d-ab83-b1079de7af46-kube-api-access-9x62d\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.554400 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2l7\" (UniqueName: \"kubernetes.io/projected/a098ff29-c757-4eac-b38d-33f5d50e1aea-kube-api-access-pm2l7\") pod \"nmstate-metrics-7f7f7578db-6csbm\" (UID: \"a098ff29-c757-4eac-b38d-33f5d50e1aea\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.565570 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.566293 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.567763 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.568893 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2tj7h" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.569384 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.582684 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632467 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-nmstate-lock\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632536 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-dbus-socket\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632583 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/65477465-ca8e-4379-bb0a-7940542990f7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 
08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632625 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx85v\" (UniqueName: \"kubernetes.io/projected/65477465-ca8e-4379-bb0a-7940542990f7-kube-api-access-hx85v\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632652 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-nmstate-lock\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632697 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/65477465-ca8e-4379-bb0a-7940542990f7-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632763 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkp6k\" (UniqueName: \"kubernetes.io/projected/865e017c-606b-41e4-82f7-3e0520607d02-kube-api-access-dkp6k\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632870 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-ovs-socket\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " 
pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632889 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-dbus-socket\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.632996 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/865e017c-606b-41e4-82f7-3e0520607d02-ovs-socket\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.647945 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkp6k\" (UniqueName: \"kubernetes.io/projected/865e017c-606b-41e4-82f7-3e0520607d02-kube-api-access-dkp6k\") pod \"nmstate-handler-sh9d8\" (UID: \"865e017c-606b-41e4-82f7-3e0520607d02\") " pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.829605 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.830011 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.831295 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/65477465-ca8e-4379-bb0a-7940542990f7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.831382 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx85v\" (UniqueName: \"kubernetes.io/projected/65477465-ca8e-4379-bb0a-7940542990f7-kube-api-access-hx85v\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.831531 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/65477465-ca8e-4379-bb0a-7940542990f7-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: E1211 08:35:05.831775 4992 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 11 08:35:05 crc kubenswrapper[4992]: E1211 08:35:05.831865 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65477465-ca8e-4379-bb0a-7940542990f7-plugin-serving-cert podName:65477465-ca8e-4379-bb0a-7940542990f7 nodeName:}" failed. No retries permitted until 2025-12-11 08:35:06.331844509 +0000 UTC m=+730.591318435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/65477465-ca8e-4379-bb0a-7940542990f7-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-wkx28" (UID: "65477465-ca8e-4379-bb0a-7940542990f7") : secret "plugin-serving-cert" not found Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.834081 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/65477465-ca8e-4379-bb0a-7940542990f7-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.882538 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx85v\" (UniqueName: \"kubernetes.io/projected/65477465-ca8e-4379-bb0a-7940542990f7-kube-api-access-hx85v\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.913028 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67ff44877f-tdxj4"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.913893 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.925172 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67ff44877f-tdxj4"] Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933174 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-trusted-ca-bundle\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933217 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22gkg\" (UniqueName: \"kubernetes.io/projected/83a375cb-b56a-49cb-8b6c-7a51669e5685-kube-api-access-22gkg\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933265 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-serving-cert\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933339 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-config\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933375 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-service-ca\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933593 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-oauth-serving-cert\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:05 crc kubenswrapper[4992]: I1211 08:35:05.933728 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-oauth-config\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.034888 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-config\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.035225 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-service-ca\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.035252 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-oauth-serving-cert\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.035282 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-oauth-config\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.036193 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-oauth-serving-cert\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.036233 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-config\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.035308 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-trusted-ca-bundle\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.036285 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-22gkg\" (UniqueName: \"kubernetes.io/projected/83a375cb-b56a-49cb-8b6c-7a51669e5685-kube-api-access-22gkg\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.036316 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-serving-cert\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.036345 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba5caba0-f6b6-400d-ab83-b1079de7af46-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.037301 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-trusted-ca-bundle\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.037308 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83a375cb-b56a-49cb-8b6c-7a51669e5685-service-ca\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.042781 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-serving-cert\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.047997 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83a375cb-b56a-49cb-8b6c-7a51669e5685-console-oauth-config\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.048263 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ba5caba0-f6b6-400d-ab83-b1079de7af46-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-t6ptn\" (UID: \"ba5caba0-f6b6-400d-ab83-b1079de7af46\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.053487 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22gkg\" (UniqueName: \"kubernetes.io/projected/83a375cb-b56a-49cb-8b6c-7a51669e5685-kube-api-access-22gkg\") pod \"console-67ff44877f-tdxj4\" (UID: \"83a375cb-b56a-49cb-8b6c-7a51669e5685\") " pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.109119 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm"] Dec 11 08:35:06 crc kubenswrapper[4992]: W1211 08:35:06.116005 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda098ff29_c757_4eac_b38d_33f5d50e1aea.slice/crio-d9c968fa22835403b2b2bafd3465c2ab65083705a52733c55112fb28182add39 WatchSource:0}: Error finding container 
d9c968fa22835403b2b2bafd3465c2ab65083705a52733c55112fb28182add39: Status 404 returned error can't find the container with id d9c968fa22835403b2b2bafd3465c2ab65083705a52733c55112fb28182add39 Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.236204 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.338757 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/65477465-ca8e-4379-bb0a-7940542990f7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.342711 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/65477465-ca8e-4379-bb0a-7940542990f7-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-wkx28\" (UID: \"65477465-ca8e-4379-bb0a-7940542990f7\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.343370 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.397651 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sh9d8" event={"ID":"865e017c-606b-41e4-82f7-3e0520607d02","Type":"ContainerStarted","Data":"63e7f8cb662635981cf04223ba9c3cf60d7b913a132c25d473b0a63fb8195eeb"} Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.398963 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" event={"ID":"a098ff29-c757-4eac-b38d-33f5d50e1aea","Type":"ContainerStarted","Data":"d9c968fa22835403b2b2bafd3465c2ab65083705a52733c55112fb28182add39"} Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.498863 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.518745 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn"] Dec 11 08:35:06 crc kubenswrapper[4992]: W1211 08:35:06.524033 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba5caba0_f6b6_400d_ab83_b1079de7af46.slice/crio-8a090d5a8e2afcd460e1124dd118476a9b6ea760dc332fd645b0f131ab7e5c7d WatchSource:0}: Error finding container 8a090d5a8e2afcd460e1124dd118476a9b6ea760dc332fd645b0f131ab7e5c7d: Status 404 returned error can't find the container with id 8a090d5a8e2afcd460e1124dd118476a9b6ea760dc332fd645b0f131ab7e5c7d Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.629768 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67ff44877f-tdxj4"] Dec 11 08:35:06 crc kubenswrapper[4992]: W1211 08:35:06.636902 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a375cb_b56a_49cb_8b6c_7a51669e5685.slice/crio-454c5e513ae1a83f04cd3be565e5f23b155c79ac8e17779d2490d599995aa256 WatchSource:0}: Error finding container 454c5e513ae1a83f04cd3be565e5f23b155c79ac8e17779d2490d599995aa256: Status 404 returned error can't find the container with id 454c5e513ae1a83f04cd3be565e5f23b155c79ac8e17779d2490d599995aa256 Dec 11 08:35:06 crc kubenswrapper[4992]: I1211 08:35:06.726918 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28"] Dec 11 08:35:07 crc kubenswrapper[4992]: I1211 08:35:07.406204 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" event={"ID":"ba5caba0-f6b6-400d-ab83-b1079de7af46","Type":"ContainerStarted","Data":"8a090d5a8e2afcd460e1124dd118476a9b6ea760dc332fd645b0f131ab7e5c7d"} Dec 11 08:35:07 crc kubenswrapper[4992]: I1211 08:35:07.407300 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" event={"ID":"65477465-ca8e-4379-bb0a-7940542990f7","Type":"ContainerStarted","Data":"d34a3fff4c0c4e64bfe5fa0d8f5860856561e406e20247f98a33b8f589ebddde"} Dec 11 08:35:07 crc kubenswrapper[4992]: I1211 08:35:07.408482 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67ff44877f-tdxj4" event={"ID":"83a375cb-b56a-49cb-8b6c-7a51669e5685","Type":"ContainerStarted","Data":"7f89b50e5a3286b43db5849f9eea15e27b4c25ebcfe10713bdbd0b809f8aced9"} Dec 11 08:35:07 crc kubenswrapper[4992]: I1211 08:35:07.408614 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67ff44877f-tdxj4" event={"ID":"83a375cb-b56a-49cb-8b6c-7a51669e5685","Type":"ContainerStarted","Data":"454c5e513ae1a83f04cd3be565e5f23b155c79ac8e17779d2490d599995aa256"} Dec 11 08:35:07 crc kubenswrapper[4992]: I1211 08:35:07.432001 4992 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-console/console-67ff44877f-tdxj4" podStartSLOduration=2.431981596 podStartE2EDuration="2.431981596s" podCreationTimestamp="2025-12-11 08:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:35:07.425394054 +0000 UTC m=+731.684868000" watchObservedRunningTime="2025-12-11 08:35:07.431981596 +0000 UTC m=+731.691455522" Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.418948 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sh9d8" event={"ID":"865e017c-606b-41e4-82f7-3e0520607d02","Type":"ContainerStarted","Data":"c41e5d0b126fabe9e2e6226ed81502519ccb4630d0d95959247a19abc7634552"} Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.419866 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.421515 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" event={"ID":"a098ff29-c757-4eac-b38d-33f5d50e1aea","Type":"ContainerStarted","Data":"339f855a3a14e8688e0894ef742d05b42ad7c7ce225c0d0b7a1c9eeb520dccd5"} Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.422928 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" event={"ID":"ba5caba0-f6b6-400d-ab83-b1079de7af46","Type":"ContainerStarted","Data":"3d16bb0fffc45de4fd5d61cbdf47ded39de20f811fc2cd51da3a1e7a232ae168"} Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.423033 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.426196 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" 
event={"ID":"65477465-ca8e-4379-bb0a-7940542990f7","Type":"ContainerStarted","Data":"830735bcc514324a682a74d3cda801276eb76056797357eb64558b346e5722cc"} Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.441448 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sh9d8" podStartSLOduration=1.349052469 podStartE2EDuration="4.44143233s" podCreationTimestamp="2025-12-11 08:35:05 +0000 UTC" firstStartedPulling="2025-12-11 08:35:05.875459316 +0000 UTC m=+730.134933242" lastFinishedPulling="2025-12-11 08:35:08.967839177 +0000 UTC m=+733.227313103" observedRunningTime="2025-12-11 08:35:09.439542404 +0000 UTC m=+733.699016330" watchObservedRunningTime="2025-12-11 08:35:09.44143233 +0000 UTC m=+733.700906266" Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.460160 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-wkx28" podStartSLOduration=2.249423423 podStartE2EDuration="4.460136313s" podCreationTimestamp="2025-12-11 08:35:05 +0000 UTC" firstStartedPulling="2025-12-11 08:35:06.740783264 +0000 UTC m=+731.000257190" lastFinishedPulling="2025-12-11 08:35:08.951496154 +0000 UTC m=+733.210970080" observedRunningTime="2025-12-11 08:35:09.454416582 +0000 UTC m=+733.713890508" watchObservedRunningTime="2025-12-11 08:35:09.460136313 +0000 UTC m=+733.719610249" Dec 11 08:35:09 crc kubenswrapper[4992]: I1211 08:35:09.478234 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" podStartSLOduration=2.054591228 podStartE2EDuration="4.478210869s" podCreationTimestamp="2025-12-11 08:35:05 +0000 UTC" firstStartedPulling="2025-12-11 08:35:06.526426506 +0000 UTC m=+730.785900432" lastFinishedPulling="2025-12-11 08:35:08.950046147 +0000 UTC m=+733.209520073" observedRunningTime="2025-12-11 08:35:09.476904396 +0000 UTC m=+733.736378322" watchObservedRunningTime="2025-12-11 
08:35:09.478210869 +0000 UTC m=+733.737684795" Dec 11 08:35:12 crc kubenswrapper[4992]: I1211 08:35:12.443177 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" event={"ID":"a098ff29-c757-4eac-b38d-33f5d50e1aea","Type":"ContainerStarted","Data":"d0bcc1fc88e69fd500f03a7e5dd7b96ecf1ecf4de23dd60308b3d25b591a8dee"} Dec 11 08:35:12 crc kubenswrapper[4992]: I1211 08:35:12.466109 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6csbm" podStartSLOduration=1.688261709 podStartE2EDuration="7.466089942s" podCreationTimestamp="2025-12-11 08:35:05 +0000 UTC" firstStartedPulling="2025-12-11 08:35:06.119370533 +0000 UTC m=+730.378844459" lastFinishedPulling="2025-12-11 08:35:11.897198766 +0000 UTC m=+736.156672692" observedRunningTime="2025-12-11 08:35:12.463043657 +0000 UTC m=+736.722517623" watchObservedRunningTime="2025-12-11 08:35:12.466089942 +0000 UTC m=+736.725563868" Dec 11 08:35:15 crc kubenswrapper[4992]: I1211 08:35:15.862605 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sh9d8" Dec 11 08:35:16 crc kubenswrapper[4992]: I1211 08:35:16.236935 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:16 crc kubenswrapper[4992]: I1211 08:35:16.237015 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:16 crc kubenswrapper[4992]: I1211 08:35:16.242647 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:16 crc kubenswrapper[4992]: I1211 08:35:16.479117 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67ff44877f-tdxj4" Dec 11 08:35:16 crc kubenswrapper[4992]: I1211 08:35:16.557146 4992 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9lslc"] Dec 11 08:35:26 crc kubenswrapper[4992]: I1211 08:35:26.352033 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-t6ptn" Dec 11 08:35:35 crc kubenswrapper[4992]: I1211 08:35:35.380251 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:35:35 crc kubenswrapper[4992]: I1211 08:35:35.382172 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:35:38 crc kubenswrapper[4992]: I1211 08:35:38.143754 4992 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 08:35:41 crc kubenswrapper[4992]: I1211 08:35:41.602497 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9lslc" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerName="console" containerID="cri-o://0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf" gracePeriod=15 Dec 11 08:35:41 crc kubenswrapper[4992]: I1211 08:35:41.943992 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9lslc_3c7d713b-b1c4-4254-9d99-fa7defb0fb2b/console/0.log" Dec 11 08:35:41 crc kubenswrapper[4992]: I1211 08:35:41.944063 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.043880 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-oauth-serving-cert\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.043945 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-trusted-ca-bundle\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.043995 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-oauth-config\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.044044 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-serving-cert\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.044094 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-service-ca\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.044132 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-config\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.044223 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb6kx\" (UniqueName: \"kubernetes.io/projected/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-kube-api-access-vb6kx\") pod \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\" (UID: \"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b\") " Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.044960 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.045022 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.045911 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-service-ca" (OuterVolumeSpecName: "service-ca") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.046206 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-config" (OuterVolumeSpecName: "console-config") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.049063 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn"] Dec 11 08:35:42 crc kubenswrapper[4992]: E1211 08:35:42.049333 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerName="console" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.049351 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerName="console" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.050051 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerName="console" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.051348 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.052181 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-kube-api-access-vb6kx" (OuterVolumeSpecName: "kube-api-access-vb6kx") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "kube-api-access-vb6kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.053262 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.053586 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.053868 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" (UID: "3c7d713b-b1c4-4254-9d99-fa7defb0fb2b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.068932 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn"] Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145532 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145681 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145722 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tsp\" (UniqueName: \"kubernetes.io/projected/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-kube-api-access-l5tsp\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145849 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb6kx\" (UniqueName: \"kubernetes.io/projected/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-kube-api-access-vb6kx\") on node \"crc\" DevicePath \"\"" Dec 
11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145871 4992 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145884 4992 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145896 4992 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145909 4992 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145923 4992 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.145935 4992 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.246465 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " 
pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.246541 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.246570 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tsp\" (UniqueName: \"kubernetes.io/projected/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-kube-api-access-l5tsp\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.247214 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.247239 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.263805 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tsp\" (UniqueName: \"kubernetes.io/projected/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-kube-api-access-l5tsp\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.395097 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.644087 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9lslc_3c7d713b-b1c4-4254-9d99-fa7defb0fb2b/console/0.log" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.644163 4992 generic.go:334] "Generic (PLEG): container finished" podID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" containerID="0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf" exitCode=2 Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.644214 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lslc" event={"ID":"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b","Type":"ContainerDied","Data":"0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf"} Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.644267 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lslc" event={"ID":"3c7d713b-b1c4-4254-9d99-fa7defb0fb2b","Type":"ContainerDied","Data":"5ba224a85f804334e99daf5ac98f1da15a156793a10f3fa62dfd9bd5a6e37668"} Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.644324 4992 scope.go:117] "RemoveContainer" containerID="0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.644372 4992 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lslc" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.682391 4992 scope.go:117] "RemoveContainer" containerID="0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf" Dec 11 08:35:42 crc kubenswrapper[4992]: E1211 08:35:42.683436 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf\": container with ID starting with 0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf not found: ID does not exist" containerID="0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.683918 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf"} err="failed to get container status \"0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf\": rpc error: code = NotFound desc = could not find container \"0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf\": container with ID starting with 0e49e84fddf1cb078019c821a17ef6fe803faa826f3751650f2c165b85761ecf not found: ID does not exist" Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.685047 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9lslc"] Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.696058 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9lslc"] Dec 11 08:35:42 crc kubenswrapper[4992]: I1211 08:35:42.712513 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn"] Dec 11 08:35:43 crc kubenswrapper[4992]: I1211 08:35:43.659082 4992 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" event={"ID":"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe","Type":"ContainerStarted","Data":"4b9ed2e0d0ff93b0b113af3bb0649ee19f7bd37f78ba4999101b6047a8d747b0"} Dec 11 08:35:43 crc kubenswrapper[4992]: I1211 08:35:43.659169 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" event={"ID":"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe","Type":"ContainerStarted","Data":"b9ba26d70088bca28cb6f128457dee47c0e9c5d8bc5e0b62bc71041ec14e2263"} Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.107859 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7d713b-b1c4-4254-9d99-fa7defb0fb2b" path="/var/lib/kubelet/pods/3c7d713b-b1c4-4254-9d99-fa7defb0fb2b/volumes" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.415148 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vmxdn"] Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.421901 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.429358 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmxdn"] Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.481248 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-catalog-content\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.481326 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-utilities\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.481352 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4p2\" (UniqueName: \"kubernetes.io/projected/212efd10-5dff-4829-8d4e-993420339fea-kube-api-access-vt4p2\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.583212 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-catalog-content\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.583281 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-utilities\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.583307 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4p2\" (UniqueName: \"kubernetes.io/projected/212efd10-5dff-4829-8d4e-993420339fea-kube-api-access-vt4p2\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.583818 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-catalog-content\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.583878 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-utilities\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.607001 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4p2\" (UniqueName: \"kubernetes.io/projected/212efd10-5dff-4829-8d4e-993420339fea-kube-api-access-vt4p2\") pod \"redhat-operators-vmxdn\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") " pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.665699 4992 generic.go:334] "Generic (PLEG): container finished" podID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" 
containerID="4b9ed2e0d0ff93b0b113af3bb0649ee19f7bd37f78ba4999101b6047a8d747b0" exitCode=0 Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.665765 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" event={"ID":"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe","Type":"ContainerDied","Data":"4b9ed2e0d0ff93b0b113af3bb0649ee19f7bd37f78ba4999101b6047a8d747b0"} Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.746560 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmxdn" Dec 11 08:35:44 crc kubenswrapper[4992]: I1211 08:35:44.962364 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vmxdn"] Dec 11 08:35:45 crc kubenswrapper[4992]: E1211 08:35:45.236713 4992 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212efd10_5dff_4829_8d4e_993420339fea.slice/crio-conmon-5e94982441dd66f1080e9c0edfa1c72c50a1d413d6d82e8f8d82e12c9e542f14.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod212efd10_5dff_4829_8d4e_993420339fea.slice/crio-5e94982441dd66f1080e9c0edfa1c72c50a1d413d6d82e8f8d82e12c9e542f14.scope\": RecentStats: unable to find data in memory cache]" Dec 11 08:35:45 crc kubenswrapper[4992]: I1211 08:35:45.683435 4992 generic.go:334] "Generic (PLEG): container finished" podID="212efd10-5dff-4829-8d4e-993420339fea" containerID="5e94982441dd66f1080e9c0edfa1c72c50a1d413d6d82e8f8d82e12c9e542f14" exitCode=0 Dec 11 08:35:45 crc kubenswrapper[4992]: I1211 08:35:45.683487 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" 
event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerDied","Data":"5e94982441dd66f1080e9c0edfa1c72c50a1d413d6d82e8f8d82e12c9e542f14"} Dec 11 08:35:45 crc kubenswrapper[4992]: I1211 08:35:45.683520 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerStarted","Data":"b00aa5f537426810853fbd25550e0e747072514bb22004c3489834f5f3d2f402"} Dec 11 08:35:46 crc kubenswrapper[4992]: I1211 08:35:46.693312 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerStarted","Data":"a5e835198add22b27995f09b98abf78c000f2b18f4f009a60a692867aab65c39"} Dec 11 08:35:46 crc kubenswrapper[4992]: I1211 08:35:46.695276 4992 generic.go:334] "Generic (PLEG): container finished" podID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerID="692c375081192e40b90475c53b46b09de7d834854f5f77ccf0811b1e7b87ea8f" exitCode=0 Dec 11 08:35:46 crc kubenswrapper[4992]: I1211 08:35:46.695391 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" event={"ID":"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe","Type":"ContainerDied","Data":"692c375081192e40b90475c53b46b09de7d834854f5f77ccf0811b1e7b87ea8f"} Dec 11 08:35:47 crc kubenswrapper[4992]: I1211 08:35:47.705969 4992 generic.go:334] "Generic (PLEG): container finished" podID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerID="bdbfd26789fa80116c4b795b6ff5bd5b7fee62c538d1f98455116176d2819a19" exitCode=0 Dec 11 08:35:47 crc kubenswrapper[4992]: I1211 08:35:47.706061 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" 
event={"ID":"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe","Type":"ContainerDied","Data":"bdbfd26789fa80116c4b795b6ff5bd5b7fee62c538d1f98455116176d2819a19"}
Dec 11 08:35:47 crc kubenswrapper[4992]: I1211 08:35:47.709129 4992 generic.go:334] "Generic (PLEG): container finished" podID="212efd10-5dff-4829-8d4e-993420339fea" containerID="a5e835198add22b27995f09b98abf78c000f2b18f4f009a60a692867aab65c39" exitCode=0
Dec 11 08:35:47 crc kubenswrapper[4992]: I1211 08:35:47.709183 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerDied","Data":"a5e835198add22b27995f09b98abf78c000f2b18f4f009a60a692867aab65c39"}
Dec 11 08:35:48 crc kubenswrapper[4992]: I1211 08:35:48.956400 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn"
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.156242 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-util\") pod \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") "
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.156370 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5tsp\" (UniqueName: \"kubernetes.io/projected/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-kube-api-access-l5tsp\") pod \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") "
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.156472 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-bundle\") pod \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\" (UID: \"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe\") "
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.158012 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-bundle" (OuterVolumeSpecName: "bundle") pod "9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" (UID: "9776ba1b-3de0-4a08-b43d-f6c5f7487ffe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.163905 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-kube-api-access-l5tsp" (OuterVolumeSpecName: "kube-api-access-l5tsp") pod "9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" (UID: "9776ba1b-3de0-4a08-b43d-f6c5f7487ffe"). InnerVolumeSpecName "kube-api-access-l5tsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.174111 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-util" (OuterVolumeSpecName: "util") pod "9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" (UID: "9776ba1b-3de0-4a08-b43d-f6c5f7487ffe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.258466 4992 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.258513 4992 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-util\") on node \"crc\" DevicePath \"\""
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.258535 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5tsp\" (UniqueName: \"kubernetes.io/projected/9776ba1b-3de0-4a08-b43d-f6c5f7487ffe-kube-api-access-l5tsp\") on node \"crc\" DevicePath \"\""
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.726423 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn" event={"ID":"9776ba1b-3de0-4a08-b43d-f6c5f7487ffe","Type":"ContainerDied","Data":"b9ba26d70088bca28cb6f128457dee47c0e9c5d8bc5e0b62bc71041ec14e2263"}
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.727252 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ba26d70088bca28cb6f128457dee47c0e9c5d8bc5e0b62bc71041ec14e2263"
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.727089 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn"
Dec 11 08:35:49 crc kubenswrapper[4992]: I1211 08:35:49.729338 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerStarted","Data":"57da8490713d9365834f558f17708c22f68e3be6c10f29ee91060aa265387ba3"}
Dec 11 08:35:50 crc kubenswrapper[4992]: I1211 08:35:50.007750 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vmxdn" podStartSLOduration=3.367940709 podStartE2EDuration="6.007722515s" podCreationTimestamp="2025-12-11 08:35:44 +0000 UTC" firstStartedPulling="2025-12-11 08:35:45.685074152 +0000 UTC m=+769.944548078" lastFinishedPulling="2025-12-11 08:35:48.324855958 +0000 UTC m=+772.584329884" observedRunningTime="2025-12-11 08:35:49.76023515 +0000 UTC m=+774.019709116" watchObservedRunningTime="2025-12-11 08:35:50.007722515 +0000 UTC m=+774.267196441"
Dec 11 08:35:54 crc kubenswrapper[4992]: I1211 08:35:54.747192 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vmxdn"
Dec 11 08:35:54 crc kubenswrapper[4992]: I1211 08:35:54.747571 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vmxdn"
Dec 11 08:35:55 crc kubenswrapper[4992]: I1211 08:35:55.814354 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vmxdn" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="registry-server" probeResult="failure" output=<
Dec 11 08:35:55 crc kubenswrapper[4992]: timeout: failed to connect service ":50051" within 1s
Dec 11 08:35:55 crc kubenswrapper[4992]: >
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.378727 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"]
Dec 11 08:35:57 crc kubenswrapper[4992]: E1211 08:35:57.379161 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="extract"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.379173 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="extract"
Dec 11 08:35:57 crc kubenswrapper[4992]: E1211 08:35:57.379182 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="util"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.379188 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="util"
Dec 11 08:35:57 crc kubenswrapper[4992]: E1211 08:35:57.379203 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="pull"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.379209 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="pull"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.379296 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="9776ba1b-3de0-4a08-b43d-f6c5f7487ffe" containerName="extract"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.379712 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.401611 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.401998 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.402072 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.403524 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tw4lf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.406791 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.414977 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"]
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.465827 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48361753-e5d3-4311-b9e0-78de22981923-apiservice-cert\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.465875 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzj5\" (UniqueName: \"kubernetes.io/projected/48361753-e5d3-4311-b9e0-78de22981923-kube-api-access-vbzj5\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.465909 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48361753-e5d3-4311-b9e0-78de22981923-webhook-cert\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.567751 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48361753-e5d3-4311-b9e0-78de22981923-webhook-cert\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.568342 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48361753-e5d3-4311-b9e0-78de22981923-apiservice-cert\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.569093 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbzj5\" (UniqueName: \"kubernetes.io/projected/48361753-e5d3-4311-b9e0-78de22981923-kube-api-access-vbzj5\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.575007 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48361753-e5d3-4311-b9e0-78de22981923-apiservice-cert\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.587972 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzj5\" (UniqueName: \"kubernetes.io/projected/48361753-e5d3-4311-b9e0-78de22981923-kube-api-access-vbzj5\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.592113 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48361753-e5d3-4311-b9e0-78de22981923-webhook-cert\") pod \"metallb-operator-controller-manager-7c6f79466f-zkrkf\" (UID: \"48361753-e5d3-4311-b9e0-78de22981923\") " pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.723374 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.866557 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"]
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.867306 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.876271 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.876735 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.876802 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hqztg"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.905734 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"]
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.978457 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d7371c-f87c-44c9-868e-636a222d606f-webhook-cert\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.978522 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsrx\" (UniqueName: \"kubernetes.io/projected/19d7371c-f87c-44c9-868e-636a222d606f-kube-api-access-cnsrx\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:57 crc kubenswrapper[4992]: I1211 08:35:57.978859 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d7371c-f87c-44c9-868e-636a222d606f-apiservice-cert\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.016851 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"]
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.080099 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d7371c-f87c-44c9-868e-636a222d606f-apiservice-cert\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.080184 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d7371c-f87c-44c9-868e-636a222d606f-webhook-cert\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.080243 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnsrx\" (UniqueName: \"kubernetes.io/projected/19d7371c-f87c-44c9-868e-636a222d606f-kube-api-access-cnsrx\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.086035 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/19d7371c-f87c-44c9-868e-636a222d606f-apiservice-cert\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.086313 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/19d7371c-f87c-44c9-868e-636a222d606f-webhook-cert\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.097145 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnsrx\" (UniqueName: \"kubernetes.io/projected/19d7371c-f87c-44c9-868e-636a222d606f-kube-api-access-cnsrx\") pod \"metallb-operator-webhook-server-7d64577cd-5nznr\" (UID: \"19d7371c-f87c-44c9-868e-636a222d606f\") " pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.231256 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.511305 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"]
Dec 11 08:35:58 crc kubenswrapper[4992]: W1211 08:35:58.519610 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19d7371c_f87c_44c9_868e_636a222d606f.slice/crio-610a5f388fd1543273727d5e64d5cc063af7e2b947f03bdb26028734a2e83256 WatchSource:0}: Error finding container 610a5f388fd1543273727d5e64d5cc063af7e2b947f03bdb26028734a2e83256: Status 404 returned error can't find the container with id 610a5f388fd1543273727d5e64d5cc063af7e2b947f03bdb26028734a2e83256
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.787090 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr" event={"ID":"19d7371c-f87c-44c9-868e-636a222d606f","Type":"ContainerStarted","Data":"610a5f388fd1543273727d5e64d5cc063af7e2b947f03bdb26028734a2e83256"}
Dec 11 08:35:58 crc kubenswrapper[4992]: I1211 08:35:58.789233 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf" event={"ID":"48361753-e5d3-4311-b9e0-78de22981923","Type":"ContainerStarted","Data":"f458bdfd5a851057c1dd9233d3a5caa1955de139b4e77cde090aa29dca0c5f09"}
Dec 11 08:36:02 crc kubenswrapper[4992]: I1211 08:36:02.817759 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf" event={"ID":"48361753-e5d3-4311-b9e0-78de22981923","Type":"ContainerStarted","Data":"712bff7ad2893c78f42c3561bdfda3e567812c7545d07b3709c43d9da01b8933"}
Dec 11 08:36:03 crc kubenswrapper[4992]: I1211 08:36:03.824459 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:36:03 crc kubenswrapper[4992]: I1211 08:36:03.848296 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf" podStartSLOduration=2.467690071 podStartE2EDuration="6.848278454s" podCreationTimestamp="2025-12-11 08:35:57 +0000 UTC" firstStartedPulling="2025-12-11 08:35:58.034206965 +0000 UTC m=+782.293680891" lastFinishedPulling="2025-12-11 08:36:02.414795348 +0000 UTC m=+786.674269274" observedRunningTime="2025-12-11 08:36:03.843232849 +0000 UTC m=+788.102706795" watchObservedRunningTime="2025-12-11 08:36:03.848278454 +0000 UTC m=+788.107752380"
Dec 11 08:36:04 crc kubenswrapper[4992]: I1211 08:36:04.786354 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vmxdn"
Dec 11 08:36:04 crc kubenswrapper[4992]: I1211 08:36:04.823348 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vmxdn"
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.016389 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmxdn"]
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.378803 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.378868 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.378917 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c"
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.379432 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.379496 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7" gracePeriod=600
Dec 11 08:36:05 crc kubenswrapper[4992]: E1211 08:36:05.508347 4992 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa42ae65_5fda_421e_b27a_6d8a0b2defb3.slice/crio-60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7.scope\": RecentStats: unable to find data in memory cache]"
Dec 11 08:36:05 crc kubenswrapper[4992]: I1211 08:36:05.836190 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vmxdn" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="registry-server" containerID="cri-o://57da8490713d9365834f558f17708c22f68e3be6c10f29ee91060aa265387ba3" gracePeriod=2
Dec 11 08:36:07 crc kubenswrapper[4992]: I1211 08:36:07.857148 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7" exitCode=0
Dec 11 08:36:07 crc kubenswrapper[4992]: I1211 08:36:07.857212 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7"}
Dec 11 08:36:07 crc kubenswrapper[4992]: I1211 08:36:07.857566 4992 scope.go:117] "RemoveContainer" containerID="c7dc8fd690f6db0535b70af4d93802ec848135dcfde017e2a96b74005cc0d3f8"
Dec 11 08:36:07 crc kubenswrapper[4992]: I1211 08:36:07.861587 4992 generic.go:334] "Generic (PLEG): container finished" podID="212efd10-5dff-4829-8d4e-993420339fea" containerID="57da8490713d9365834f558f17708c22f68e3be6c10f29ee91060aa265387ba3" exitCode=0
Dec 11 08:36:07 crc kubenswrapper[4992]: I1211 08:36:07.861675 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerDied","Data":"57da8490713d9365834f558f17708c22f68e3be6c10f29ee91060aa265387ba3"}
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.072353 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmxdn"
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.214106 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt4p2\" (UniqueName: \"kubernetes.io/projected/212efd10-5dff-4829-8d4e-993420339fea-kube-api-access-vt4p2\") pod \"212efd10-5dff-4829-8d4e-993420339fea\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") "
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.214208 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-utilities\") pod \"212efd10-5dff-4829-8d4e-993420339fea\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") "
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.214268 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-catalog-content\") pod \"212efd10-5dff-4829-8d4e-993420339fea\" (UID: \"212efd10-5dff-4829-8d4e-993420339fea\") "
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.217286 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-utilities" (OuterVolumeSpecName: "utilities") pod "212efd10-5dff-4829-8d4e-993420339fea" (UID: "212efd10-5dff-4829-8d4e-993420339fea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.233214 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/212efd10-5dff-4829-8d4e-993420339fea-kube-api-access-vt4p2" (OuterVolumeSpecName: "kube-api-access-vt4p2") pod "212efd10-5dff-4829-8d4e-993420339fea" (UID: "212efd10-5dff-4829-8d4e-993420339fea"). InnerVolumeSpecName "kube-api-access-vt4p2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.315356 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt4p2\" (UniqueName: \"kubernetes.io/projected/212efd10-5dff-4829-8d4e-993420339fea-kube-api-access-vt4p2\") on node \"crc\" DevicePath \"\""
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.315405 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.323611 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "212efd10-5dff-4829-8d4e-993420339fea" (UID: "212efd10-5dff-4829-8d4e-993420339fea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.416841 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/212efd10-5dff-4829-8d4e-993420339fea-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.875729 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vmxdn" event={"ID":"212efd10-5dff-4829-8d4e-993420339fea","Type":"ContainerDied","Data":"b00aa5f537426810853fbd25550e0e747072514bb22004c3489834f5f3d2f402"}
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.875785 4992 scope.go:117] "RemoveContainer" containerID="57da8490713d9365834f558f17708c22f68e3be6c10f29ee91060aa265387ba3"
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.875850 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vmxdn"
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.905844 4992 scope.go:117] "RemoveContainer" containerID="a5e835198add22b27995f09b98abf78c000f2b18f4f009a60a692867aab65c39"
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.920909 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vmxdn"]
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.923953 4992 scope.go:117] "RemoveContainer" containerID="5e94982441dd66f1080e9c0edfa1c72c50a1d413d6d82e8f8d82e12c9e542f14"
Dec 11 08:36:08 crc kubenswrapper[4992]: I1211 08:36:08.925510 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vmxdn"]
Dec 11 08:36:09 crc kubenswrapper[4992]: I1211 08:36:09.884430 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"052d1b39952568f7c7dadc00d816b97c8f69c2e12d851ed0f8503ebf05896a23"}
Dec 11 08:36:10 crc kubenswrapper[4992]: I1211 08:36:10.102933 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="212efd10-5dff-4829-8d4e-993420339fea" path="/var/lib/kubelet/pods/212efd10-5dff-4829-8d4e-993420339fea/volumes"
Dec 11 08:36:14 crc kubenswrapper[4992]: I1211 08:36:14.919265 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr" event={"ID":"19d7371c-f87c-44c9-868e-636a222d606f","Type":"ContainerStarted","Data":"98317bcade3ed5c427e962a980f49300ff833063ed9ad7c9ea6df9f4e3e3ac08"}
Dec 11 08:36:14 crc kubenswrapper[4992]: I1211 08:36:14.919859 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:36:28 crc kubenswrapper[4992]: I1211 08:36:28.238417 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr"
Dec 11 08:36:28 crc kubenswrapper[4992]: I1211 08:36:28.272046 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7d64577cd-5nznr" podStartSLOduration=15.980610986 podStartE2EDuration="31.272023549s" podCreationTimestamp="2025-12-11 08:35:57 +0000 UTC" firstStartedPulling="2025-12-11 08:35:58.522005889 +0000 UTC m=+782.781479815" lastFinishedPulling="2025-12-11 08:36:13.813418452 +0000 UTC m=+798.072892378" observedRunningTime="2025-12-11 08:36:14.968232263 +0000 UTC m=+799.227706209" watchObservedRunningTime="2025-12-11 08:36:28.272023549 +0000 UTC m=+812.531497485"
Dec 11 08:36:37 crc kubenswrapper[4992]: I1211 08:36:37.726341 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7c6f79466f-zkrkf"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.502989 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-splw9"]
Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.503716 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="extract-utilities"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.503740 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="extract-utilities"
Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.503759 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="extract-content"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.503767 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="extract-content"
Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.503781 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="registry-server"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.503793 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="registry-server"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.503942 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="212efd10-5dff-4829-8d4e-993420339fea" containerName="registry-server"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.506516 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-splw9"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.507011 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l"]
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.508163 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.512172 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.512287 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.512361 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.513020 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kc24g"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.518033 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l"]
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.591114 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lfvx8"]
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.592214 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lfvx8"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.594970 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.594970 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.595148 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.595115 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zwrvj"
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.609497 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-g47kr"]
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.620560 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.623040 4992 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.640899 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-metrics-certs\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.640944 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.640966 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2jb\" (UniqueName: \"kubernetes.io/projected/3cb24660-a51b-4701-a0e2-f4edc25d0960-kube-api-access-2w2jb\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.640991 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-frr-conf\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641006 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" 
(UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-reloader\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641235 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34a79bdc-5774-468d-9136-9e03be822975-metrics-certs\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641294 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07a93a81-2773-49bc-a345-528d2d52dbd6-metallb-excludel2\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641314 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3cb24660-a51b-4701-a0e2-f4edc25d0960-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641335 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/07a93a81-2773-49bc-a345-528d2d52dbd6-kube-api-access-lvzcn\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641354 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-frr-sockets\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641685 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34a79bdc-5774-468d-9136-9e03be822975-frr-startup\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641858 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-metrics\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.641945 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78hx9\" (UniqueName: \"kubernetes.io/projected/34a79bdc-5774-468d-9136-9e03be822975-kube-api-access-78hx9\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.643107 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-g47kr"] Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743172 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-reloader\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743256 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-q4g5h\" (UniqueName: \"kubernetes.io/projected/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-kube-api-access-q4g5h\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743293 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34a79bdc-5774-468d-9136-9e03be822975-metrics-certs\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743317 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-cert\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743350 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07a93a81-2773-49bc-a345-528d2d52dbd6-metallb-excludel2\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743372 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3cb24660-a51b-4701-a0e2-f4edc25d0960-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743391 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvzcn\" (UniqueName: 
\"kubernetes.io/projected/07a93a81-2773-49bc-a345-528d2d52dbd6-kube-api-access-lvzcn\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743408 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-frr-sockets\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743428 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34a79bdc-5774-468d-9136-9e03be822975-frr-startup\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743463 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-metrics\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743482 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78hx9\" (UniqueName: \"kubernetes.io/projected/34a79bdc-5774-468d-9136-9e03be822975-kube-api-access-78hx9\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743521 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-metrics-certs\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" 
Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743540 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743564 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2jb\" (UniqueName: \"kubernetes.io/projected/3cb24660-a51b-4701-a0e2-f4edc25d0960-kube-api-access-2w2jb\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743593 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-metrics-certs\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.743652 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-frr-conf\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.743870 4992 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.743945 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-metrics-certs podName:07a93a81-2773-49bc-a345-528d2d52dbd6 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:36:39.243925889 +0000 UTC m=+823.503399815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-metrics-certs") pod "speaker-lfvx8" (UID: "07a93a81-2773-49bc-a345-528d2d52dbd6") : secret "speaker-certs-secret" not found Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.744133 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-frr-conf\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.744229 4992 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.744283 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist podName:07a93a81-2773-49bc-a345-528d2d52dbd6 nodeName:}" failed. No retries permitted until 2025-12-11 08:36:39.244265038 +0000 UTC m=+823.503738964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist") pod "speaker-lfvx8" (UID: "07a93a81-2773-49bc-a345-528d2d52dbd6") : secret "metallb-memberlist" not found Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.744342 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-reloader\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.744336 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-frr-sockets\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.744555 4992 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 11 08:36:38 crc kubenswrapper[4992]: E1211 08:36:38.744685 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cb24660-a51b-4701-a0e2-f4edc25d0960-cert podName:3cb24660-a51b-4701-a0e2-f4edc25d0960 nodeName:}" failed. No retries permitted until 2025-12-11 08:36:39.244657898 +0000 UTC m=+823.504131824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3cb24660-a51b-4701-a0e2-f4edc25d0960-cert") pod "frr-k8s-webhook-server-7784b6fcf-wjv4l" (UID: "3cb24660-a51b-4701-a0e2-f4edc25d0960") : secret "frr-k8s-webhook-server-cert" not found Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.744900 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34a79bdc-5774-468d-9136-9e03be822975-metrics\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.745190 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34a79bdc-5774-468d-9136-9e03be822975-frr-startup\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.745570 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/07a93a81-2773-49bc-a345-528d2d52dbd6-metallb-excludel2\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.752588 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34a79bdc-5774-468d-9136-9e03be822975-metrics-certs\") pod \"frr-k8s-splw9\" (UID: \"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.761252 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78hx9\" (UniqueName: \"kubernetes.io/projected/34a79bdc-5774-468d-9136-9e03be822975-kube-api-access-78hx9\") pod \"frr-k8s-splw9\" (UID: 
\"34a79bdc-5774-468d-9136-9e03be822975\") " pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.762279 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2jb\" (UniqueName: \"kubernetes.io/projected/3cb24660-a51b-4701-a0e2-f4edc25d0960-kube-api-access-2w2jb\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.776780 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvzcn\" (UniqueName: \"kubernetes.io/projected/07a93a81-2773-49bc-a345-528d2d52dbd6-kube-api-access-lvzcn\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.830192 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.844802 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-metrics-certs\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.844898 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4g5h\" (UniqueName: \"kubernetes.io/projected/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-kube-api-access-q4g5h\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.844924 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-cert\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.849019 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-cert\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.858933 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-metrics-certs\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.867034 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4g5h\" (UniqueName: \"kubernetes.io/projected/8b4f416d-3812-4dc9-8fa4-5667d5f2339b-kube-api-access-q4g5h\") pod \"controller-5bddd4b946-g47kr\" (UID: \"8b4f416d-3812-4dc9-8fa4-5667d5f2339b\") " pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:38 crc kubenswrapper[4992]: I1211 08:36:38.939708 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.251328 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3cb24660-a51b-4701-a0e2-f4edc25d0960-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.251745 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-metrics-certs\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.251764 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:39 crc kubenswrapper[4992]: E1211 08:36:39.251878 4992 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 08:36:39 crc kubenswrapper[4992]: E1211 08:36:39.251948 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist podName:07a93a81-2773-49bc-a345-528d2d52dbd6 nodeName:}" failed. No retries permitted until 2025-12-11 08:36:40.251929382 +0000 UTC m=+824.511403308 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist") pod "speaker-lfvx8" (UID: "07a93a81-2773-49bc-a345-528d2d52dbd6") : secret "metallb-memberlist" not found Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.258532 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3cb24660-a51b-4701-a0e2-f4edc25d0960-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-wjv4l\" (UID: \"3cb24660-a51b-4701-a0e2-f4edc25d0960\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.261227 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-metrics-certs\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.370093 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-g47kr"] Dec 11 08:36:39 crc kubenswrapper[4992]: W1211 08:36:39.379784 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b4f416d_3812_4dc9_8fa4_5667d5f2339b.slice/crio-3fbc0983f62239e3d92a33570239095249e4a4d423d21026cece43a4fd52cf48 WatchSource:0}: Error finding container 3fbc0983f62239e3d92a33570239095249e4a4d423d21026cece43a4fd52cf48: Status 404 returned error can't find the container with id 3fbc0983f62239e3d92a33570239095249e4a4d423d21026cece43a4fd52cf48 Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.437178 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:39 crc kubenswrapper[4992]: I1211 08:36:39.663908 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l"] Dec 11 08:36:39 crc kubenswrapper[4992]: W1211 08:36:39.672208 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cb24660_a51b_4701_a0e2_f4edc25d0960.slice/crio-c3f8e4adcb3847a02f825389fc2a5fcadb6e9742397a51f2dc16d6eb0613f9fa WatchSource:0}: Error finding container c3f8e4adcb3847a02f825389fc2a5fcadb6e9742397a51f2dc16d6eb0613f9fa: Status 404 returned error can't find the container with id c3f8e4adcb3847a02f825389fc2a5fcadb6e9742397a51f2dc16d6eb0613f9fa Dec 11 08:36:40 crc kubenswrapper[4992]: I1211 08:36:40.080846 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-g47kr" event={"ID":"8b4f416d-3812-4dc9-8fa4-5667d5f2339b","Type":"ContainerStarted","Data":"3fbc0983f62239e3d92a33570239095249e4a4d423d21026cece43a4fd52cf48"} Dec 11 08:36:40 crc kubenswrapper[4992]: I1211 08:36:40.082480 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" event={"ID":"3cb24660-a51b-4701-a0e2-f4edc25d0960","Type":"ContainerStarted","Data":"c3f8e4adcb3847a02f825389fc2a5fcadb6e9742397a51f2dc16d6eb0613f9fa"} Dec 11 08:36:40 crc kubenswrapper[4992]: I1211 08:36:40.268190 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:40 crc kubenswrapper[4992]: E1211 08:36:40.268402 4992 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 08:36:40 
crc kubenswrapper[4992]: E1211 08:36:40.268521 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist podName:07a93a81-2773-49bc-a345-528d2d52dbd6 nodeName:}" failed. No retries permitted until 2025-12-11 08:36:42.268492192 +0000 UTC m=+826.527966158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist") pod "speaker-lfvx8" (UID: "07a93a81-2773-49bc-a345-528d2d52dbd6") : secret "metallb-memberlist" not found Dec 11 08:36:41 crc kubenswrapper[4992]: I1211 08:36:41.088115 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"71be85be000d492c6632e73f4947e13efc24dd544dce7539bb8b9f2176cde408"} Dec 11 08:36:41 crc kubenswrapper[4992]: I1211 08:36:41.089669 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-g47kr" event={"ID":"8b4f416d-3812-4dc9-8fa4-5667d5f2339b","Type":"ContainerStarted","Data":"844cb6de214cadbd50c052dc9d684ecaa20a5e5f7fbca9542bd17cd9a8412bcd"} Dec 11 08:36:41 crc kubenswrapper[4992]: I1211 08:36:41.089703 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-g47kr" event={"ID":"8b4f416d-3812-4dc9-8fa4-5667d5f2339b","Type":"ContainerStarted","Data":"80349686eca08a72edbca54445834c9b0412838b145d78c54fa4721241fb6ab9"} Dec 11 08:36:41 crc kubenswrapper[4992]: I1211 08:36:41.089870 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:41 crc kubenswrapper[4992]: I1211 08:36:41.109897 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-g47kr" podStartSLOduration=3.109875529 podStartE2EDuration="3.109875529s" 
podCreationTimestamp="2025-12-11 08:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:36:41.108024594 +0000 UTC m=+825.367498520" watchObservedRunningTime="2025-12-11 08:36:41.109875529 +0000 UTC m=+825.369349455" Dec 11 08:36:42 crc kubenswrapper[4992]: I1211 08:36:42.297051 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:42 crc kubenswrapper[4992]: I1211 08:36:42.303030 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/07a93a81-2773-49bc-a345-528d2d52dbd6-memberlist\") pod \"speaker-lfvx8\" (UID: \"07a93a81-2773-49bc-a345-528d2d52dbd6\") " pod="metallb-system/speaker-lfvx8" Dec 11 08:36:42 crc kubenswrapper[4992]: I1211 08:36:42.523854 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lfvx8" Dec 11 08:36:42 crc kubenswrapper[4992]: W1211 08:36:42.557995 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a93a81_2773_49bc_a345_528d2d52dbd6.slice/crio-4078a60b9da323b59b314c5e78c2fbf9bf4903f0d6274bb8d46a17ae78c2c53c WatchSource:0}: Error finding container 4078a60b9da323b59b314c5e78c2fbf9bf4903f0d6274bb8d46a17ae78c2c53c: Status 404 returned error can't find the container with id 4078a60b9da323b59b314c5e78c2fbf9bf4903f0d6274bb8d46a17ae78c2c53c Dec 11 08:36:43 crc kubenswrapper[4992]: I1211 08:36:43.116623 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lfvx8" event={"ID":"07a93a81-2773-49bc-a345-528d2d52dbd6","Type":"ContainerStarted","Data":"49af6444f445a379fca43faec5bc403df1341c100acabdb3cf8217b6fa21d643"} Dec 11 08:36:43 crc kubenswrapper[4992]: I1211 08:36:43.117029 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lfvx8" event={"ID":"07a93a81-2773-49bc-a345-528d2d52dbd6","Type":"ContainerStarted","Data":"4078a60b9da323b59b314c5e78c2fbf9bf4903f0d6274bb8d46a17ae78c2c53c"} Dec 11 08:36:44 crc kubenswrapper[4992]: I1211 08:36:44.123360 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lfvx8" event={"ID":"07a93a81-2773-49bc-a345-528d2d52dbd6","Type":"ContainerStarted","Data":"b16bc22bf60a035406c4198edf6552244d11f4a0137e22da902837a98d83fe8a"} Dec 11 08:36:44 crc kubenswrapper[4992]: I1211 08:36:44.124023 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lfvx8" Dec 11 08:36:44 crc kubenswrapper[4992]: I1211 08:36:44.139426 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lfvx8" podStartSLOduration=6.139406551 podStartE2EDuration="6.139406551s" podCreationTimestamp="2025-12-11 08:36:38 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:36:44.138391895 +0000 UTC m=+828.397865831" watchObservedRunningTime="2025-12-11 08:36:44.139406551 +0000 UTC m=+828.398880497" Dec 11 08:36:49 crc kubenswrapper[4992]: I1211 08:36:49.168123 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" event={"ID":"3cb24660-a51b-4701-a0e2-f4edc25d0960","Type":"ContainerStarted","Data":"09b2d531d58546f13d5b15f90d556a6a842ba08ffcddce8dffb74aa285c6dda1"} Dec 11 08:36:49 crc kubenswrapper[4992]: I1211 08:36:49.169740 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:49 crc kubenswrapper[4992]: I1211 08:36:49.171606 4992 generic.go:334] "Generic (PLEG): container finished" podID="34a79bdc-5774-468d-9136-9e03be822975" containerID="11e253289e802eba1d78110faa6e97dd46a88e97f03d4eb3611ebd5be9fcde1d" exitCode=0 Dec 11 08:36:49 crc kubenswrapper[4992]: I1211 08:36:49.171655 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerDied","Data":"11e253289e802eba1d78110faa6e97dd46a88e97f03d4eb3611ebd5be9fcde1d"} Dec 11 08:36:49 crc kubenswrapper[4992]: I1211 08:36:49.187336 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" podStartSLOduration=2.438716242 podStartE2EDuration="11.187322067s" podCreationTimestamp="2025-12-11 08:36:38 +0000 UTC" firstStartedPulling="2025-12-11 08:36:39.674296903 +0000 UTC m=+823.933770829" lastFinishedPulling="2025-12-11 08:36:48.422902728 +0000 UTC m=+832.682376654" observedRunningTime="2025-12-11 08:36:49.186234361 +0000 UTC m=+833.445708317" watchObservedRunningTime="2025-12-11 08:36:49.187322067 +0000 UTC m=+833.446795993" Dec 11 
08:36:50 crc kubenswrapper[4992]: I1211 08:36:50.181330 4992 generic.go:334] "Generic (PLEG): container finished" podID="34a79bdc-5774-468d-9136-9e03be822975" containerID="c517f0122c4d0a567cb601c97b4a8c349a9b1843adcae6ac1f2c2d2c137ea8d3" exitCode=0 Dec 11 08:36:50 crc kubenswrapper[4992]: I1211 08:36:50.181409 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerDied","Data":"c517f0122c4d0a567cb601c97b4a8c349a9b1843adcae6ac1f2c2d2c137ea8d3"} Dec 11 08:36:51 crc kubenswrapper[4992]: I1211 08:36:51.189365 4992 generic.go:334] "Generic (PLEG): container finished" podID="34a79bdc-5774-468d-9136-9e03be822975" containerID="0b5f58073d7c898311cfcca537017f9eea813c8d6f03384a6b473a49bf7a2e9a" exitCode=0 Dec 11 08:36:51 crc kubenswrapper[4992]: I1211 08:36:51.189445 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerDied","Data":"0b5f58073d7c898311cfcca537017f9eea813c8d6f03384a6b473a49bf7a2e9a"} Dec 11 08:36:52 crc kubenswrapper[4992]: I1211 08:36:52.200213 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"b50e1f6bbfe99a578be46a924f10c001eae4652d05e57ec738aea7d5b607b219"} Dec 11 08:36:52 crc kubenswrapper[4992]: I1211 08:36:52.200536 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"6948a521593a52da2f2a17cd06a785425ff7d455292cb75347034f233fb7af60"} Dec 11 08:36:52 crc kubenswrapper[4992]: I1211 08:36:52.200551 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" 
event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"aba203cbe984750b96d826091c70884c7d4d1c147ae3d2b10ecfac53a01494cd"} Dec 11 08:36:52 crc kubenswrapper[4992]: I1211 08:36:52.200564 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"bff7f8c7b17c558d2db315fd8e281a26a6d27c773d012fbc041dade565fd2966"} Dec 11 08:36:52 crc kubenswrapper[4992]: I1211 08:36:52.528910 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lfvx8" Dec 11 08:36:53 crc kubenswrapper[4992]: I1211 08:36:53.210970 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"d4c1c72aacecb9f37db9e7f8da04c28169756faac807ab6434dcf0885f06f4c4"} Dec 11 08:36:53 crc kubenswrapper[4992]: I1211 08:36:53.211320 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-splw9" event={"ID":"34a79bdc-5774-468d-9136-9e03be822975","Type":"ContainerStarted","Data":"8f526a2aaaad973dd7263adc9e3f758a5692211de245a8f929046c46391dc72f"} Dec 11 08:36:53 crc kubenswrapper[4992]: I1211 08:36:53.211523 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:53 crc kubenswrapper[4992]: I1211 08:36:53.238829 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-splw9" podStartSLOduration=7.236322663 podStartE2EDuration="15.238804471s" podCreationTimestamp="2025-12-11 08:36:38 +0000 UTC" firstStartedPulling="2025-12-11 08:36:40.395770112 +0000 UTC m=+824.655244038" lastFinishedPulling="2025-12-11 08:36:48.39825192 +0000 UTC m=+832.657725846" observedRunningTime="2025-12-11 08:36:53.232730792 +0000 UTC m=+837.492204728" watchObservedRunningTime="2025-12-11 08:36:53.238804471 +0000 UTC 
m=+837.498278417" Dec 11 08:36:53 crc kubenswrapper[4992]: I1211 08:36:53.830507 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:53 crc kubenswrapper[4992]: I1211 08:36:53.865220 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-splw9" Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.840305 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9hxg5"] Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.841724 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.847524 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.847674 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.847772 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rlkfr" Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.853167 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9hxg5"] Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.931664 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965h2\" (UniqueName: \"kubernetes.io/projected/0e6788b5-5448-4592-ade4-a60ba5ef9038-kube-api-access-965h2\") pod \"openstack-operator-index-9hxg5\" (UID: \"0e6788b5-5448-4592-ade4-a60ba5ef9038\") " pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:36:58 crc kubenswrapper[4992]: I1211 08:36:58.943800 4992 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-g47kr" Dec 11 08:36:59 crc kubenswrapper[4992]: I1211 08:36:59.032835 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-965h2\" (UniqueName: \"kubernetes.io/projected/0e6788b5-5448-4592-ade4-a60ba5ef9038-kube-api-access-965h2\") pod \"openstack-operator-index-9hxg5\" (UID: \"0e6788b5-5448-4592-ade4-a60ba5ef9038\") " pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:36:59 crc kubenswrapper[4992]: I1211 08:36:59.056594 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-965h2\" (UniqueName: \"kubernetes.io/projected/0e6788b5-5448-4592-ade4-a60ba5ef9038-kube-api-access-965h2\") pod \"openstack-operator-index-9hxg5\" (UID: \"0e6788b5-5448-4592-ade4-a60ba5ef9038\") " pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:36:59 crc kubenswrapper[4992]: I1211 08:36:59.168402 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:36:59 crc kubenswrapper[4992]: I1211 08:36:59.444097 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-wjv4l" Dec 11 08:36:59 crc kubenswrapper[4992]: I1211 08:36:59.661869 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9hxg5"] Dec 11 08:36:59 crc kubenswrapper[4992]: W1211 08:36:59.666707 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e6788b5_5448_4592_ade4_a60ba5ef9038.slice/crio-eda2920342f6ee74990965a39e507e29c3dcfa6f6fffddb100c201db789ef482 WatchSource:0}: Error finding container eda2920342f6ee74990965a39e507e29c3dcfa6f6fffddb100c201db789ef482: Status 404 returned error can't find the container with id eda2920342f6ee74990965a39e507e29c3dcfa6f6fffddb100c201db789ef482 Dec 11 08:37:00 crc kubenswrapper[4992]: I1211 08:37:00.257317 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9hxg5" event={"ID":"0e6788b5-5448-4592-ade4-a60ba5ef9038","Type":"ContainerStarted","Data":"eda2920342f6ee74990965a39e507e29c3dcfa6f6fffddb100c201db789ef482"} Dec 11 08:37:03 crc kubenswrapper[4992]: I1211 08:37:03.279172 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9hxg5" event={"ID":"0e6788b5-5448-4592-ade4-a60ba5ef9038","Type":"ContainerStarted","Data":"cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b"} Dec 11 08:37:03 crc kubenswrapper[4992]: I1211 08:37:03.299509 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9hxg5" podStartSLOduration=1.948204918 podStartE2EDuration="5.299494147s" podCreationTimestamp="2025-12-11 08:36:58 +0000 UTC" firstStartedPulling="2025-12-11 
08:36:59.669170884 +0000 UTC m=+843.928644810" lastFinishedPulling="2025-12-11 08:37:03.020460113 +0000 UTC m=+847.279934039" observedRunningTime="2025-12-11 08:37:03.295839277 +0000 UTC m=+847.555313233" watchObservedRunningTime="2025-12-11 08:37:03.299494147 +0000 UTC m=+847.558968073" Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.016368 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9hxg5"] Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.626029 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gzhqj"] Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.626807 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.641070 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gzhqj"] Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.723294 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snhxn\" (UniqueName: \"kubernetes.io/projected/36fe56e4-3db8-4d2a-8fe4-bfac398f3d92-kube-api-access-snhxn\") pod \"openstack-operator-index-gzhqj\" (UID: \"36fe56e4-3db8-4d2a-8fe4-bfac398f3d92\") " pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.825089 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snhxn\" (UniqueName: \"kubernetes.io/projected/36fe56e4-3db8-4d2a-8fe4-bfac398f3d92-kube-api-access-snhxn\") pod \"openstack-operator-index-gzhqj\" (UID: \"36fe56e4-3db8-4d2a-8fe4-bfac398f3d92\") " pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.844919 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-snhxn\" (UniqueName: \"kubernetes.io/projected/36fe56e4-3db8-4d2a-8fe4-bfac398f3d92-kube-api-access-snhxn\") pod \"openstack-operator-index-gzhqj\" (UID: \"36fe56e4-3db8-4d2a-8fe4-bfac398f3d92\") " pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:04 crc kubenswrapper[4992]: I1211 08:37:04.947895 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:05 crc kubenswrapper[4992]: I1211 08:37:05.290324 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9hxg5" podUID="0e6788b5-5448-4592-ade4-a60ba5ef9038" containerName="registry-server" containerID="cri-o://cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b" gracePeriod=2 Dec 11 08:37:05 crc kubenswrapper[4992]: I1211 08:37:05.360034 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gzhqj"] Dec 11 08:37:05 crc kubenswrapper[4992]: I1211 08:37:05.619980 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:37:05 crc kubenswrapper[4992]: I1211 08:37:05.740522 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-965h2\" (UniqueName: \"kubernetes.io/projected/0e6788b5-5448-4592-ade4-a60ba5ef9038-kube-api-access-965h2\") pod \"0e6788b5-5448-4592-ade4-a60ba5ef9038\" (UID: \"0e6788b5-5448-4592-ade4-a60ba5ef9038\") " Dec 11 08:37:05 crc kubenswrapper[4992]: I1211 08:37:05.745460 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6788b5-5448-4592-ade4-a60ba5ef9038-kube-api-access-965h2" (OuterVolumeSpecName: "kube-api-access-965h2") pod "0e6788b5-5448-4592-ade4-a60ba5ef9038" (UID: "0e6788b5-5448-4592-ade4-a60ba5ef9038"). InnerVolumeSpecName "kube-api-access-965h2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:37:05 crc kubenswrapper[4992]: I1211 08:37:05.842490 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-965h2\" (UniqueName: \"kubernetes.io/projected/0e6788b5-5448-4592-ade4-a60ba5ef9038-kube-api-access-965h2\") on node \"crc\" DevicePath \"\"" Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.300068 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gzhqj" event={"ID":"36fe56e4-3db8-4d2a-8fe4-bfac398f3d92","Type":"ContainerStarted","Data":"8e80f6e5b6026066b4d1523b641cdcb3c40bbe7677ac6902640cea976e8f84ab"} Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.300126 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gzhqj" event={"ID":"36fe56e4-3db8-4d2a-8fe4-bfac398f3d92","Type":"ContainerStarted","Data":"87f1ce5625b651509dc9cb0c523b70864960c4276fca58202798284e3f07ee32"} Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.302161 4992 generic.go:334] "Generic (PLEG): container finished" podID="0e6788b5-5448-4592-ade4-a60ba5ef9038" containerID="cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b" exitCode=0 Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.302190 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9hxg5" event={"ID":"0e6788b5-5448-4592-ade4-a60ba5ef9038","Type":"ContainerDied","Data":"cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b"} Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.302216 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9hxg5" event={"ID":"0e6788b5-5448-4592-ade4-a60ba5ef9038","Type":"ContainerDied","Data":"eda2920342f6ee74990965a39e507e29c3dcfa6f6fffddb100c201db789ef482"} Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.302241 4992 scope.go:117] "RemoveContainer" 
containerID="cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b" Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.302254 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9hxg5" Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.316894 4992 scope.go:117] "RemoveContainer" containerID="cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b" Dec 11 08:37:06 crc kubenswrapper[4992]: E1211 08:37:06.317393 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b\": container with ID starting with cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b not found: ID does not exist" containerID="cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b" Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.317497 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b"} err="failed to get container status \"cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b\": rpc error: code = NotFound desc = could not find container \"cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b\": container with ID starting with cd70fe8b04eae71a49854950e56102deb9f7771eaebf8807c7df42a3ff514f5b not found: ID does not exist" Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.318274 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gzhqj" podStartSLOduration=2.257097944 podStartE2EDuration="2.318257492s" podCreationTimestamp="2025-12-11 08:37:04 +0000 UTC" firstStartedPulling="2025-12-11 08:37:05.405437633 +0000 UTC m=+849.664911559" lastFinishedPulling="2025-12-11 08:37:05.466597181 +0000 UTC m=+849.726071107" 
observedRunningTime="2025-12-11 08:37:06.315754151 +0000 UTC m=+850.575228077" watchObservedRunningTime="2025-12-11 08:37:06.318257492 +0000 UTC m=+850.577731418" Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.331666 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9hxg5"] Dec 11 08:37:06 crc kubenswrapper[4992]: I1211 08:37:06.338713 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9hxg5"] Dec 11 08:37:08 crc kubenswrapper[4992]: I1211 08:37:08.102582 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6788b5-5448-4592-ade4-a60ba5ef9038" path="/var/lib/kubelet/pods/0e6788b5-5448-4592-ade4-a60ba5ef9038/volumes" Dec 11 08:37:08 crc kubenswrapper[4992]: I1211 08:37:08.833358 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-splw9" Dec 11 08:37:14 crc kubenswrapper[4992]: I1211 08:37:14.948296 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:14 crc kubenswrapper[4992]: I1211 08:37:14.949249 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:14 crc kubenswrapper[4992]: I1211 08:37:14.983484 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:15 crc kubenswrapper[4992]: I1211 08:37:15.386958 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gzhqj" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.471300 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n"] Dec 11 08:37:29 crc kubenswrapper[4992]: E1211 08:37:29.472069 4992 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6788b5-5448-4592-ade4-a60ba5ef9038" containerName="registry-server" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.472083 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6788b5-5448-4592-ade4-a60ba5ef9038" containerName="registry-server" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.472210 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e6788b5-5448-4592-ade4-a60ba5ef9038" containerName="registry-server" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.473264 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.485053 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n"] Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.487398 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xdjk7" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.578532 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-bundle\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.578695 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chpxw\" (UniqueName: \"kubernetes.io/projected/6fef3605-0f1b-4298-b43e-13d68847c03f-kube-api-access-chpxw\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: 
\"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.578726 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-util\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.679546 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chpxw\" (UniqueName: \"kubernetes.io/projected/6fef3605-0f1b-4298-b43e-13d68847c03f-kube-api-access-chpxw\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.679621 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-util\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.679678 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-bundle\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 
08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.680132 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-bundle\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.680285 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-util\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.702521 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chpxw\" (UniqueName: \"kubernetes.io/projected/6fef3605-0f1b-4298-b43e-13d68847c03f-kube-api-access-chpxw\") pod \"2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:29 crc kubenswrapper[4992]: I1211 08:37:29.807662 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:30 crc kubenswrapper[4992]: I1211 08:37:30.051816 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n"] Dec 11 08:37:30 crc kubenswrapper[4992]: I1211 08:37:30.463372 4992 generic.go:334] "Generic (PLEG): container finished" podID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerID="00c31b2c770c7ab7ed59115418e9d32b07808070d8a2e57ddbc054442f5e03c3" exitCode=0 Dec 11 08:37:30 crc kubenswrapper[4992]: I1211 08:37:30.463454 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" event={"ID":"6fef3605-0f1b-4298-b43e-13d68847c03f","Type":"ContainerDied","Data":"00c31b2c770c7ab7ed59115418e9d32b07808070d8a2e57ddbc054442f5e03c3"} Dec 11 08:37:30 crc kubenswrapper[4992]: I1211 08:37:30.463864 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" event={"ID":"6fef3605-0f1b-4298-b43e-13d68847c03f","Type":"ContainerStarted","Data":"01a613e9217d6848181aca3a1dbd84675056834e6a80524e53525735e9da649b"} Dec 11 08:37:31 crc kubenswrapper[4992]: I1211 08:37:31.472759 4992 generic.go:334] "Generic (PLEG): container finished" podID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerID="d486bd62723d7af7800025e72d9b2037eb8a51128b806688d36386adcd46ca3f" exitCode=0 Dec 11 08:37:31 crc kubenswrapper[4992]: I1211 08:37:31.472838 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" event={"ID":"6fef3605-0f1b-4298-b43e-13d68847c03f","Type":"ContainerDied","Data":"d486bd62723d7af7800025e72d9b2037eb8a51128b806688d36386adcd46ca3f"} Dec 11 08:37:32 crc kubenswrapper[4992]: I1211 08:37:32.483346 4992 generic.go:334] 
"Generic (PLEG): container finished" podID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerID="c74ced5f38105090af4fb858dcfbb2017bad32077577a6864230a6128e576b29" exitCode=0 Dec 11 08:37:32 crc kubenswrapper[4992]: I1211 08:37:32.483402 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" event={"ID":"6fef3605-0f1b-4298-b43e-13d68847c03f","Type":"ContainerDied","Data":"c74ced5f38105090af4fb858dcfbb2017bad32077577a6864230a6128e576b29"} Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.769390 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.867420 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chpxw\" (UniqueName: \"kubernetes.io/projected/6fef3605-0f1b-4298-b43e-13d68847c03f-kube-api-access-chpxw\") pod \"6fef3605-0f1b-4298-b43e-13d68847c03f\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.867488 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-util\") pod \"6fef3605-0f1b-4298-b43e-13d68847c03f\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.867543 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-bundle\") pod \"6fef3605-0f1b-4298-b43e-13d68847c03f\" (UID: \"6fef3605-0f1b-4298-b43e-13d68847c03f\") " Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.868685 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-bundle" (OuterVolumeSpecName: "bundle") pod "6fef3605-0f1b-4298-b43e-13d68847c03f" (UID: "6fef3605-0f1b-4298-b43e-13d68847c03f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.880272 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fef3605-0f1b-4298-b43e-13d68847c03f-kube-api-access-chpxw" (OuterVolumeSpecName: "kube-api-access-chpxw") pod "6fef3605-0f1b-4298-b43e-13d68847c03f" (UID: "6fef3605-0f1b-4298-b43e-13d68847c03f"). InnerVolumeSpecName "kube-api-access-chpxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.900522 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-util" (OuterVolumeSpecName: "util") pod "6fef3605-0f1b-4298-b43e-13d68847c03f" (UID: "6fef3605-0f1b-4298-b43e-13d68847c03f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.969735 4992 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.969821 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chpxw\" (UniqueName: \"kubernetes.io/projected/6fef3605-0f1b-4298-b43e-13d68847c03f-kube-api-access-chpxw\") on node \"crc\" DevicePath \"\"" Dec 11 08:37:33 crc kubenswrapper[4992]: I1211 08:37:33.969834 4992 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fef3605-0f1b-4298-b43e-13d68847c03f-util\") on node \"crc\" DevicePath \"\"" Dec 11 08:37:34 crc kubenswrapper[4992]: I1211 08:37:34.498721 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" event={"ID":"6fef3605-0f1b-4298-b43e-13d68847c03f","Type":"ContainerDied","Data":"01a613e9217d6848181aca3a1dbd84675056834e6a80524e53525735e9da649b"} Dec 11 08:37:34 crc kubenswrapper[4992]: I1211 08:37:34.499196 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a613e9217d6848181aca3a1dbd84675056834e6a80524e53525735e9da649b" Dec 11 08:37:34 crc kubenswrapper[4992]: I1211 08:37:34.498860 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.869668 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk"] Dec 11 08:37:41 crc kubenswrapper[4992]: E1211 08:37:41.870359 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="util" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.870370 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="util" Dec 11 08:37:41 crc kubenswrapper[4992]: E1211 08:37:41.870392 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="pull" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.870398 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="pull" Dec 11 08:37:41 crc kubenswrapper[4992]: E1211 08:37:41.870407 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="extract" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.870413 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="extract" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.870516 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fef3605-0f1b-4298-b43e-13d68847c03f" containerName="extract" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.870937 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.880954 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68h7x" Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.905379 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk"] Dec 11 08:37:41 crc kubenswrapper[4992]: I1211 08:37:41.990799 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6hq\" (UniqueName: \"kubernetes.io/projected/442cb00a-6225-47e0-a88d-6d615414e5a4-kube-api-access-wf6hq\") pod \"openstack-operator-controller-operator-94cdf5849-dv4jk\" (UID: \"442cb00a-6225-47e0-a88d-6d615414e5a4\") " pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:37:42 crc kubenswrapper[4992]: I1211 08:37:42.092026 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6hq\" (UniqueName: \"kubernetes.io/projected/442cb00a-6225-47e0-a88d-6d615414e5a4-kube-api-access-wf6hq\") pod \"openstack-operator-controller-operator-94cdf5849-dv4jk\" (UID: \"442cb00a-6225-47e0-a88d-6d615414e5a4\") " pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:37:42 crc kubenswrapper[4992]: I1211 08:37:42.132077 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6hq\" (UniqueName: \"kubernetes.io/projected/442cb00a-6225-47e0-a88d-6d615414e5a4-kube-api-access-wf6hq\") pod \"openstack-operator-controller-operator-94cdf5849-dv4jk\" (UID: \"442cb00a-6225-47e0-a88d-6d615414e5a4\") " pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:37:42 crc kubenswrapper[4992]: I1211 08:37:42.189418 4992 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:37:42 crc kubenswrapper[4992]: I1211 08:37:42.647782 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk"] Dec 11 08:37:43 crc kubenswrapper[4992]: I1211 08:37:43.554182 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" event={"ID":"442cb00a-6225-47e0-a88d-6d615414e5a4","Type":"ContainerStarted","Data":"d2656e79099d070f29adc58437bb81783b3ea5457a7c883d64759d4e4d8d9f98"} Dec 11 08:37:48 crc kubenswrapper[4992]: I1211 08:37:48.594724 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" event={"ID":"442cb00a-6225-47e0-a88d-6d615414e5a4","Type":"ContainerStarted","Data":"d05315d3372ed607a0824e6fa5f5c9f0f18e7dacb2a75a99d58073e6dba3c6ec"} Dec 11 08:37:48 crc kubenswrapper[4992]: I1211 08:37:48.595273 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:37:48 crc kubenswrapper[4992]: I1211 08:37:48.637660 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" podStartSLOduration=2.827529148 podStartE2EDuration="7.637644169s" podCreationTimestamp="2025-12-11 08:37:41 +0000 UTC" firstStartedPulling="2025-12-11 08:37:42.654466038 +0000 UTC m=+886.913939964" lastFinishedPulling="2025-12-11 08:37:47.464581059 +0000 UTC m=+891.724054985" observedRunningTime="2025-12-11 08:37:48.63446969 +0000 UTC m=+892.893943616" watchObservedRunningTime="2025-12-11 08:37:48.637644169 +0000 UTC m=+892.897118095" Dec 11 08:37:52 crc kubenswrapper[4992]: I1211 08:37:52.193256 4992 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-94cdf5849-dv4jk" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.480424 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.482373 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.484863 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.489669 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-skjq4" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.490072 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.493988 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zbsqv" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.508837 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzpq\" (UniqueName: \"kubernetes.io/projected/a84e6e65-9f83-405a-a478-a53e125d5845-kube-api-access-4pzpq\") pod \"barbican-operator-controller-manager-7d9dfd778-l5mc5\" (UID: \"a84e6e65-9f83-405a-a478-a53e125d5845\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.508887 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clsx\" (UniqueName: \"kubernetes.io/projected/07859dd8-8995-4214-8ee9-6648fa5a292e-kube-api-access-5clsx\") pod \"cinder-operator-controller-manager-6c677c69b-55h47\" (UID: \"07859dd8-8995-4214-8ee9-6648fa5a292e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.511688 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.526804 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.530972 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.531997 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.534705 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kfx5s" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.549282 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.554713 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.556036 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.559803 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-f9m4v" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.574735 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.575806 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.582656 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-chmj9" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.591458 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.611557 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzpq\" (UniqueName: \"kubernetes.io/projected/a84e6e65-9f83-405a-a478-a53e125d5845-kube-api-access-4pzpq\") pod \"barbican-operator-controller-manager-7d9dfd778-l5mc5\" (UID: \"a84e6e65-9f83-405a-a478-a53e125d5845\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.611658 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clsx\" (UniqueName: \"kubernetes.io/projected/07859dd8-8995-4214-8ee9-6648fa5a292e-kube-api-access-5clsx\") pod \"cinder-operator-controller-manager-6c677c69b-55h47\" (UID: \"07859dd8-8995-4214-8ee9-6648fa5a292e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.625982 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.629156 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.644288 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.651270 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-p6bbh" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.703082 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clsx\" (UniqueName: \"kubernetes.io/projected/07859dd8-8995-4214-8ee9-6648fa5a292e-kube-api-access-5clsx\") pod \"cinder-operator-controller-manager-6c677c69b-55h47\" (UID: \"07859dd8-8995-4214-8ee9-6648fa5a292e\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.703127 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzpq\" (UniqueName: \"kubernetes.io/projected/a84e6e65-9f83-405a-a478-a53e125d5845-kube-api-access-4pzpq\") pod \"barbican-operator-controller-manager-7d9dfd778-l5mc5\" (UID: \"a84e6e65-9f83-405a-a478-a53e125d5845\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.707510 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.708566 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.713808 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xsc\" (UniqueName: \"kubernetes.io/projected/e501b125-ca5e-41f0-88c1-a9fda63de236-kube-api-access-k6xsc\") pod \"heat-operator-controller-manager-5f64f6f8bb-hsldx\" (UID: \"e501b125-ca5e-41f0-88c1-a9fda63de236\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.713903 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2lq5\" (UniqueName: \"kubernetes.io/projected/42106600-d00d-477a-aaec-102ba03cb5c6-kube-api-access-j2lq5\") pod \"glance-operator-controller-manager-5697bb5779-bdp4x\" (UID: \"42106600-d00d-477a-aaec-102ba03cb5c6\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.713929 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmxr\" (UniqueName: \"kubernetes.io/projected/a39ad598-19ba-42cb-9f35-538b68de7b04-kube-api-access-pfmxr\") pod \"designate-operator-controller-manager-697fb699cf-6gl95\" (UID: \"a39ad598-19ba-42cb-9f35-538b68de7b04\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.714000 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gtd4s" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.714137 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.727025 4992 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-r792s"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.728331 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.739454 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qb5j9" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.786506 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.786560 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.802690 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.803924 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.809923 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-r792s"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.810163 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fqzbc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.810915 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.814990 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srj6c\" (UniqueName: \"kubernetes.io/projected/b4d8b09b-a162-43bd-a91f-dc87e5c9c956-kube-api-access-srj6c\") pod \"horizon-operator-controller-manager-68c6d99b8f-jqlq7\" (UID: \"b4d8b09b-a162-43bd-a91f-dc87e5c9c956\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.815115 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2lq5\" (UniqueName: \"kubernetes.io/projected/42106600-d00d-477a-aaec-102ba03cb5c6-kube-api-access-j2lq5\") pod \"glance-operator-controller-manager-5697bb5779-bdp4x\" (UID: \"42106600-d00d-477a-aaec-102ba03cb5c6\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.815194 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmxr\" (UniqueName: \"kubernetes.io/projected/a39ad598-19ba-42cb-9f35-538b68de7b04-kube-api-access-pfmxr\") pod \"designate-operator-controller-manager-697fb699cf-6gl95\" (UID: \"a39ad598-19ba-42cb-9f35-538b68de7b04\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.815303 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 
08:38:17.815392 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6xsc\" (UniqueName: \"kubernetes.io/projected/e501b125-ca5e-41f0-88c1-a9fda63de236-kube-api-access-k6xsc\") pod \"heat-operator-controller-manager-5f64f6f8bb-hsldx\" (UID: \"e501b125-ca5e-41f0-88c1-a9fda63de236\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.815471 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r267d\" (UniqueName: \"kubernetes.io/projected/fc892dae-199a-49ca-8ddd-863a6b8426d7-kube-api-access-r267d\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.850687 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.859094 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.861572 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.862724 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmxr\" (UniqueName: \"kubernetes.io/projected/a39ad598-19ba-42cb-9f35-538b68de7b04-kube-api-access-pfmxr\") pod \"designate-operator-controller-manager-697fb699cf-6gl95\" (UID: \"a39ad598-19ba-42cb-9f35-538b68de7b04\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.863062 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.869386 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2lq5\" (UniqueName: \"kubernetes.io/projected/42106600-d00d-477a-aaec-102ba03cb5c6-kube-api-access-j2lq5\") pod \"glance-operator-controller-manager-5697bb5779-bdp4x\" (UID: \"42106600-d00d-477a-aaec-102ba03cb5c6\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.875027 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.876031 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6xsc\" (UniqueName: \"kubernetes.io/projected/e501b125-ca5e-41f0-88c1-a9fda63de236-kube-api-access-k6xsc\") pod \"heat-operator-controller-manager-5f64f6f8bb-hsldx\" (UID: \"e501b125-ca5e-41f0-88c1-a9fda63de236\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.876322 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-578g5" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.878683 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.907957 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.909027 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.915058 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.916212 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r267d\" (UniqueName: \"kubernetes.io/projected/fc892dae-199a-49ca-8ddd-863a6b8426d7-kube-api-access-r267d\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.916337 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srj6c\" (UniqueName: \"kubernetes.io/projected/b4d8b09b-a162-43bd-a91f-dc87e5c9c956-kube-api-access-srj6c\") pod \"horizon-operator-controller-manager-68c6d99b8f-jqlq7\" (UID: \"b4d8b09b-a162-43bd-a91f-dc87e5c9c956\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.916461 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzp4x\" (UniqueName: \"kubernetes.io/projected/74f7d667-67f0-459b-a7a0-f46c0e095485-kube-api-access-mzp4x\") pod \"keystone-operator-controller-manager-7765d96ddf-l6gnl\" (UID: \"74f7d667-67f0-459b-a7a0-f46c0e095485\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.916570 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.916692 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d245l\" (UniqueName: \"kubernetes.io/projected/2a91887f-977b-43dd-b638-0391348bf5d7-kube-api-access-d245l\") pod \"ironic-operator-controller-manager-967d97867-r792s\" (UID: \"2a91887f-977b-43dd-b638-0391348bf5d7\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:38:17 crc kubenswrapper[4992]: E1211 08:38:17.917393 4992 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:17 crc kubenswrapper[4992]: E1211 08:38:17.917522 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert podName:fc892dae-199a-49ca-8ddd-863a6b8426d7 nodeName:}" failed. No retries permitted until 2025-12-11 08:38:18.417506229 +0000 UTC m=+922.676980155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert") pod "infra-operator-controller-manager-78d48bff9d-hd9fc" (UID: "fc892dae-199a-49ca-8ddd-863a6b8426d7") : secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.919589 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4dw49" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.948317 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srj6c\" (UniqueName: \"kubernetes.io/projected/b4d8b09b-a162-43bd-a91f-dc87e5c9c956-kube-api-access-srj6c\") pod \"horizon-operator-controller-manager-68c6d99b8f-jqlq7\" (UID: \"b4d8b09b-a162-43bd-a91f-dc87e5c9c956\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.948583 
4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.953301 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.954304 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.960486 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-cmtsz" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.961970 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.968703 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.969416 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r267d\" (UniqueName: \"kubernetes.io/projected/fc892dae-199a-49ca-8ddd-863a6b8426d7-kube-api-access-r267d\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.979655 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.983010 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.984189 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.992971 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-r9hft"] Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.994199 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:38:17 crc kubenswrapper[4992]: I1211 08:38:17.998515 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kwgwx" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.001714 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-r9hft"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.006991 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hs4hd" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.007542 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.011696 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.014169 4992 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.018918 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7df\" (UniqueName: \"kubernetes.io/projected/995a7c64-c843-4200-b1cf-9fe6d774f457-kube-api-access-xk7df\") pod \"manila-operator-controller-manager-5b5fd79c9c-2rs5h\" (UID: \"995a7c64-c843-4200-b1cf-9fe6d774f457\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.018991 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ns5j\" (UniqueName: \"kubernetes.io/projected/2b8e6bee-2aae-4689-898a-b298fd5a3d00-kube-api-access-7ns5j\") pod \"mariadb-operator-controller-manager-79c8c4686c-7nw5b\" (UID: \"2b8e6bee-2aae-4689-898a-b298fd5a3d00\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.019022 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzp4x\" (UniqueName: \"kubernetes.io/projected/74f7d667-67f0-459b-a7a0-f46c0e095485-kube-api-access-mzp4x\") pod \"keystone-operator-controller-manager-7765d96ddf-l6gnl\" (UID: \"74f7d667-67f0-459b-a7a0-f46c0e095485\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.019080 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d245l\" (UniqueName: \"kubernetes.io/projected/2a91887f-977b-43dd-b638-0391348bf5d7-kube-api-access-d245l\") pod \"ironic-operator-controller-manager-967d97867-r792s\" (UID: \"2a91887f-977b-43dd-b638-0391348bf5d7\") " 
pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.019602 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-j8nl2" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.037898 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.044721 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.045954 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.056693 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.059470 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.073417 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.073787 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m5jkl" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.074134 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-njxnq" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.094702 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d245l\" (UniqueName: \"kubernetes.io/projected/2a91887f-977b-43dd-b638-0391348bf5d7-kube-api-access-d245l\") pod \"ironic-operator-controller-manager-967d97867-r792s\" (UID: \"2a91887f-977b-43dd-b638-0391348bf5d7\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.122010 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzp4x\" (UniqueName: \"kubernetes.io/projected/74f7d667-67f0-459b-a7a0-f46c0e095485-kube-api-access-mzp4x\") pod \"keystone-operator-controller-manager-7765d96ddf-l6gnl\" (UID: \"74f7d667-67f0-459b-a7a0-f46c0e095485\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.127715 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.128283 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ns5j\" (UniqueName: \"kubernetes.io/projected/2b8e6bee-2aae-4689-898a-b298fd5a3d00-kube-api-access-7ns5j\") pod \"mariadb-operator-controller-manager-79c8c4686c-7nw5b\" (UID: \"2b8e6bee-2aae-4689-898a-b298fd5a3d00\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.129392 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgx6f\" (UniqueName: \"kubernetes.io/projected/aa6fcfad-b39a-4621-aebe-0b48a4106495-kube-api-access-sgx6f\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-rd8j7\" (UID: \"aa6fcfad-b39a-4621-aebe-0b48a4106495\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.129557 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7df\" (UniqueName: \"kubernetes.io/projected/995a7c64-c843-4200-b1cf-9fe6d774f457-kube-api-access-xk7df\") pod \"manila-operator-controller-manager-5b5fd79c9c-2rs5h\" (UID: \"995a7c64-c843-4200-b1cf-9fe6d774f457\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.129593 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7tp\" (UniqueName: \"kubernetes.io/projected/94ff8875-2a35-47c0-8da4-1fcc4fd0836e-kube-api-access-pv7tp\") pod \"octavia-operator-controller-manager-998648c74-r9hft\" (UID: \"94ff8875-2a35-47c0-8da4-1fcc4fd0836e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:38:18 crc 
kubenswrapper[4992]: I1211 08:38:18.129659 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbfg\" (UniqueName: \"kubernetes.io/projected/22b393e2-e34e-4f47-a8f8-136d9a6613f6-kube-api-access-cqbfg\") pod \"nova-operator-controller-manager-697bc559fc-9r2v8\" (UID: \"22b393e2-e34e-4f47-a8f8-136d9a6613f6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.129684 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ngfj\" (UniqueName: \"kubernetes.io/projected/2b843345-399a-41e3-abe0-f7f41682250a-kube-api-access-4ngfj\") pod \"ovn-operator-controller-manager-b6456fdb6-4lp9k\" (UID: \"2b843345-399a-41e3-abe0-f7f41682250a\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.170940 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.187109 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.192064 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.197694 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7df\" (UniqueName: \"kubernetes.io/projected/995a7c64-c843-4200-b1cf-9fe6d774f457-kube-api-access-xk7df\") pod \"manila-operator-controller-manager-5b5fd79c9c-2rs5h\" (UID: \"995a7c64-c843-4200-b1cf-9fe6d774f457\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.215481 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f8vxp" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.251603 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ns5j\" (UniqueName: \"kubernetes.io/projected/2b8e6bee-2aae-4689-898a-b298fd5a3d00-kube-api-access-7ns5j\") pod \"mariadb-operator-controller-manager-79c8c4686c-7nw5b\" (UID: \"2b8e6bee-2aae-4689-898a-b298fd5a3d00\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.263860 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6mk\" (UniqueName: \"kubernetes.io/projected/d708dd00-6c6a-4dd0-ac04-e0b57a753f1f-kube-api-access-gj6mk\") pod \"placement-operator-controller-manager-78f8948974-8q2xn\" (UID: \"d708dd00-6c6a-4dd0-ac04-e0b57a753f1f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.263979 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfscr\" (UniqueName: \"kubernetes.io/projected/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-kube-api-access-nfscr\") pod 
\"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.264126 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.264227 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgx6f\" (UniqueName: \"kubernetes.io/projected/aa6fcfad-b39a-4621-aebe-0b48a4106495-kube-api-access-sgx6f\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-rd8j7\" (UID: \"aa6fcfad-b39a-4621-aebe-0b48a4106495\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.264361 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7tp\" (UniqueName: \"kubernetes.io/projected/94ff8875-2a35-47c0-8da4-1fcc4fd0836e-kube-api-access-pv7tp\") pod \"octavia-operator-controller-manager-998648c74-r9hft\" (UID: \"94ff8875-2a35-47c0-8da4-1fcc4fd0836e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.264424 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqbfg\" (UniqueName: \"kubernetes.io/projected/22b393e2-e34e-4f47-a8f8-136d9a6613f6-kube-api-access-cqbfg\") pod \"nova-operator-controller-manager-697bc559fc-9r2v8\" (UID: \"22b393e2-e34e-4f47-a8f8-136d9a6613f6\") " 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.264471 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ngfj\" (UniqueName: \"kubernetes.io/projected/2b843345-399a-41e3-abe0-f7f41682250a-kube-api-access-4ngfj\") pod \"ovn-operator-controller-manager-b6456fdb6-4lp9k\" (UID: \"2b843345-399a-41e3-abe0-f7f41682250a\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.265696 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.277288 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.303741 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mc5vl" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.313880 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ngfj\" (UniqueName: \"kubernetes.io/projected/2b843345-399a-41e3-abe0-f7f41682250a-kube-api-access-4ngfj\") pod \"ovn-operator-controller-manager-b6456fdb6-4lp9k\" (UID: \"2b843345-399a-41e3-abe0-f7f41682250a\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.313975 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgx6f\" (UniqueName: \"kubernetes.io/projected/aa6fcfad-b39a-4621-aebe-0b48a4106495-kube-api-access-sgx6f\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-rd8j7\" (UID: \"aa6fcfad-b39a-4621-aebe-0b48a4106495\") " 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.322502 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.334389 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqbfg\" (UniqueName: \"kubernetes.io/projected/22b393e2-e34e-4f47-a8f8-136d9a6613f6-kube-api-access-cqbfg\") pod \"nova-operator-controller-manager-697bc559fc-9r2v8\" (UID: \"22b393e2-e34e-4f47-a8f8-136d9a6613f6\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.340327 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7tp\" (UniqueName: \"kubernetes.io/projected/94ff8875-2a35-47c0-8da4-1fcc4fd0836e-kube-api-access-pv7tp\") pod \"octavia-operator-controller-manager-998648c74-r9hft\" (UID: \"94ff8875-2a35-47c0-8da4-1fcc4fd0836e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.350432 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.376249 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj6mk\" (UniqueName: \"kubernetes.io/projected/d708dd00-6c6a-4dd0-ac04-e0b57a753f1f-kube-api-access-gj6mk\") pod \"placement-operator-controller-manager-78f8948974-8q2xn\" (UID: \"d708dd00-6c6a-4dd0-ac04-e0b57a753f1f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.376320 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nfscr\" (UniqueName: \"kubernetes.io/projected/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-kube-api-access-nfscr\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.376379 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.376412 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6w9r\" (UniqueName: \"kubernetes.io/projected/2e7b36cb-508f-46e0-acd1-6eca36c331b1-kube-api-access-k6w9r\") pod \"telemetry-operator-controller-manager-58d5ff84df-l54jq\" (UID: \"2e7b36cb-508f-46e0-acd1-6eca36c331b1\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.376465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwwz8\" (UniqueName: \"kubernetes.io/projected/ae97f467-cfd0-46c1-a261-36f09387f3e0-kube-api-access-nwwz8\") pod \"swift-operator-controller-manager-9d58d64bc-d9jmh\" (UID: \"ae97f467-cfd0-46c1-a261-36f09387f3e0\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.376813 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.377755 4992 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.377797 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert podName:24ddc127-1ac3-4dd9-ae14-c133c9ad387b nodeName:}" failed. No retries permitted until 2025-12-11 08:38:18.877781034 +0000 UTC m=+923.137254960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5wrjt" (UID: "24ddc127-1ac3-4dd9-ae14-c133c9ad387b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.394127 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.400698 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.416598 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.417847 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj6mk\" (UniqueName: \"kubernetes.io/projected/d708dd00-6c6a-4dd0-ac04-e0b57a753f1f-kube-api-access-gj6mk\") pod \"placement-operator-controller-manager-78f8948974-8q2xn\" (UID: \"d708dd00-6c6a-4dd0-ac04-e0b57a753f1f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.422852 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.429708 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-g9552"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.431048 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfscr\" (UniqueName: \"kubernetes.io/projected/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-kube-api-access-nfscr\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.432049 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.433509 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-g9552"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.437776 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fbcxd" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.445484 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.449798 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.469759 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.477082 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hv7mm" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.478284 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwwz8\" (UniqueName: \"kubernetes.io/projected/ae97f467-cfd0-46c1-a261-36f09387f3e0-kube-api-access-nwwz8\") pod \"swift-operator-controller-manager-9d58d64bc-d9jmh\" (UID: \"ae97f467-cfd0-46c1-a261-36f09387f3e0\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.478428 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.478585 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6w9r\" (UniqueName: \"kubernetes.io/projected/2e7b36cb-508f-46e0-acd1-6eca36c331b1-kube-api-access-k6w9r\") pod \"telemetry-operator-controller-manager-58d5ff84df-l54jq\" (UID: \"2e7b36cb-508f-46e0-acd1-6eca36c331b1\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.478729 4992 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.478791 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert podName:fc892dae-199a-49ca-8ddd-863a6b8426d7 nodeName:}" failed. No retries permitted until 2025-12-11 08:38:19.478769065 +0000 UTC m=+923.738243051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert") pod "infra-operator-controller-manager-78d48bff9d-hd9fc" (UID: "fc892dae-199a-49ca-8ddd-863a6b8426d7") : secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.500822 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.501211 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.508780 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6w9r\" (UniqueName: \"kubernetes.io/projected/2e7b36cb-508f-46e0-acd1-6eca36c331b1-kube-api-access-k6w9r\") pod \"telemetry-operator-controller-manager-58d5ff84df-l54jq\" (UID: \"2e7b36cb-508f-46e0-acd1-6eca36c331b1\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.531996 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwwz8\" (UniqueName: \"kubernetes.io/projected/ae97f467-cfd0-46c1-a261-36f09387f3e0-kube-api-access-nwwz8\") pod \"swift-operator-controller-manager-9d58d64bc-d9jmh\" (UID: \"ae97f467-cfd0-46c1-a261-36f09387f3e0\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.556070 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.577415 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.578563 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.580525 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsxzx\" (UniqueName: \"kubernetes.io/projected/d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a-kube-api-access-fsxzx\") pod \"test-operator-controller-manager-5854674fcc-g9552\" (UID: \"d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.580675 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk4cf\" (UniqueName: \"kubernetes.io/projected/3635faed-4894-4eb8-94f7-33b055b860c4-kube-api-access-sk4cf\") pod \"watcher-operator-controller-manager-75944c9b7-rph2p\" (UID: \"3635faed-4894-4eb8-94f7-33b055b860c4\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.583325 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.583548 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.583685 4992 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fl284" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.593750 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.610359 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.651202 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.654972 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.657653 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-l6t5r" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.660974 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.672285 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.683754 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 
08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.683891 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.683935 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4cf\" (UniqueName: \"kubernetes.io/projected/3635faed-4894-4eb8-94f7-33b055b860c4-kube-api-access-sk4cf\") pod \"watcher-operator-controller-manager-75944c9b7-rph2p\" (UID: \"3635faed-4894-4eb8-94f7-33b055b860c4\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.684091 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsxzx\" (UniqueName: \"kubernetes.io/projected/d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a-kube-api-access-fsxzx\") pod \"test-operator-controller-manager-5854674fcc-g9552\" (UID: \"d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.684165 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-kube-api-access-b4mq5\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.703770 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fsxzx\" (UniqueName: \"kubernetes.io/projected/d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a-kube-api-access-fsxzx\") pod \"test-operator-controller-manager-5854674fcc-g9552\" (UID: \"d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.704917 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk4cf\" (UniqueName: \"kubernetes.io/projected/3635faed-4894-4eb8-94f7-33b055b860c4-kube-api-access-sk4cf\") pod \"watcher-operator-controller-manager-75944c9b7-rph2p\" (UID: \"3635faed-4894-4eb8-94f7-33b055b860c4\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:38:18 crc kubenswrapper[4992]: W1211 08:38:18.729884 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda84e6e65_9f83_405a_a478_a53e125d5845.slice/crio-3a70db1ea6cd10e78fb943bab8931832fe63c0fcc2c6deff5093ff517f07e183 WatchSource:0}: Error finding container 3a70db1ea6cd10e78fb943bab8931832fe63c0fcc2c6deff5093ff517f07e183: Status 404 returned error can't find the container with id 3a70db1ea6cd10e78fb943bab8931832fe63c0fcc2c6deff5093ff517f07e183 Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.766964 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.788199 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswbn\" (UniqueName: \"kubernetes.io/projected/5c6deb1d-64a1-4f75-baaf-3ce6c908b850-kube-api-access-cswbn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kxgst\" (UID: \"5c6deb1d-64a1-4f75-baaf-3ce6c908b850\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.788313 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-kube-api-access-b4mq5\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.788378 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.788489 4992 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.788516 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: 
\"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.788553 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:19.288532808 +0000 UTC m=+923.548006804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.788806 4992 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.788861 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:19.288841085 +0000 UTC m=+923.548315011 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "metrics-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.798978 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.832213 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.839798 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mq5\" (UniqueName: \"kubernetes.io/projected/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-kube-api-access-b4mq5\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.842918 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" event={"ID":"a84e6e65-9f83-405a-a478-a53e125d5845","Type":"ContainerStarted","Data":"3a70db1ea6cd10e78fb943bab8931832fe63c0fcc2c6deff5093ff517f07e183"} Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.853162 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.892430 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswbn\" (UniqueName: \"kubernetes.io/projected/5c6deb1d-64a1-4f75-baaf-3ce6c908b850-kube-api-access-cswbn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kxgst\" (UID: \"5c6deb1d-64a1-4f75-baaf-3ce6c908b850\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.892529 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.892788 4992 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: E1211 08:38:18.892848 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert podName:24ddc127-1ac3-4dd9-ae14-c133c9ad387b nodeName:}" failed. No retries permitted until 2025-12-11 08:38:19.892830141 +0000 UTC m=+924.152304067 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5wrjt" (UID: "24ddc127-1ac3-4dd9-ae14-c133c9ad387b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.911118 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95"] Dec 11 08:38:18 crc kubenswrapper[4992]: I1211 08:38:18.927475 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswbn\" (UniqueName: \"kubernetes.io/projected/5c6deb1d-64a1-4f75-baaf-3ce6c908b850-kube-api-access-cswbn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-kxgst\" (UID: \"5c6deb1d-64a1-4f75-baaf-3ce6c908b850\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.139060 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.272325 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.289688 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.295674 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx"] Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.295954 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07859dd8_8995_4214_8ee9_6648fa5a292e.slice/crio-935ca43ade978f24273ec43075fef3a679e12cc78d2ac0bead089e65020be689 WatchSource:0}: Error finding container 935ca43ade978f24273ec43075fef3a679e12cc78d2ac0bead089e65020be689: Status 404 returned error can't find the container with id 935ca43ade978f24273ec43075fef3a679e12cc78d2ac0bead089e65020be689 Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.301784 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.301840 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: 
\"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.301970 4992 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.302015 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:20.302000715 +0000 UTC m=+924.561474641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "metrics-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.302162 4992 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.302215 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:20.302200981 +0000 UTC m=+924.561674907 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "webhook-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.305883 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode501b125_ca5e_41f0_88c1_a9fda63de236.slice/crio-055486516ace2495a8d03112722c42c63ec9e7d774ba742b88f04ed46cc9b395 WatchSource:0}: Error finding container 055486516ace2495a8d03112722c42c63ec9e7d774ba742b88f04ed46cc9b395: Status 404 returned error can't find the container with id 055486516ace2495a8d03112722c42c63ec9e7d774ba742b88f04ed46cc9b395 Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.330829 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d8b09b_a162_43bd_a91f_dc87e5c9c956.slice/crio-2eae8dfae065aa47a6fbd08590c2da0725321ac1326e7c8a512f71af6ae221c5 WatchSource:0}: Error finding container 2eae8dfae065aa47a6fbd08590c2da0725321ac1326e7c8a512f71af6ae221c5: Status 404 returned error can't find the container with id 2eae8dfae065aa47a6fbd08590c2da0725321ac1326e7c8a512f71af6ae221c5 Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.349590 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.509780 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 
08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.509924 4992 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.510005 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert podName:fc892dae-199a-49ca-8ddd-863a6b8426d7 nodeName:}" failed. No retries permitted until 2025-12-11 08:38:21.509980976 +0000 UTC m=+925.769454902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert") pod "infra-operator-controller-manager-78d48bff9d-hd9fc" (UID: "fc892dae-199a-49ca-8ddd-863a6b8426d7") : secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.663675 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k"] Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.680109 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6fcfad_b39a_4621_aebe_0b48a4106495.slice/crio-79affd499467226bfe47279e1c315d85b8240f46735749239d1713e395081d58 WatchSource:0}: Error finding container 79affd499467226bfe47279e1c315d85b8240f46735749239d1713e395081d58: Status 404 returned error can't find the container with id 79affd499467226bfe47279e1c315d85b8240f46735749239d1713e395081d58 Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.685673 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94ff8875_2a35_47c0_8da4_1fcc4fd0836e.slice/crio-888840071f3903de5a1bc46e564fd3f4f87f577b1355bb894646a84773e435f7 WatchSource:0}: Error finding container 
888840071f3903de5a1bc46e564fd3f4f87f577b1355bb894646a84773e435f7: Status 404 returned error can't find the container with id 888840071f3903de5a1bc46e564fd3f4f87f577b1355bb894646a84773e435f7 Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.688773 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.701852 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.714731 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-r792s"] Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.721792 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22b393e2_e34e_4f47_a8f8_136d9a6613f6.slice/crio-34a63accd80d0007e8a5143faa135536fa89c9116ca3939b09ea13eb766f2947 WatchSource:0}: Error finding container 34a63accd80d0007e8a5143faa135536fa89c9116ca3939b09ea13eb766f2947: Status 404 returned error can't find the container with id 34a63accd80d0007e8a5143faa135536fa89c9116ca3939b09ea13eb766f2947 Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.729186 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.736837 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h"] Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.738415 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b8e6bee_2aae_4689_898a_b298fd5a3d00.slice/crio-c8f3b91af3453545d21c6be97142f150395ff4298e4d3aac758212f92e075879 WatchSource:0}: Error finding container c8f3b91af3453545d21c6be97142f150395ff4298e4d3aac758212f92e075879: Status 404 returned error can't find the container with id c8f3b91af3453545d21c6be97142f150395ff4298e4d3aac758212f92e075879 Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.740710 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2cb37d7_1dea_4b2a_bbb4_6a4b2dfaaa0a.slice/crio-4e23a1847ef1a094bad7103614ccffbf9bf8ce56bfacae0e749198f7ce8f9820 WatchSource:0}: Error finding container 4e23a1847ef1a094bad7103614ccffbf9bf8ce56bfacae0e749198f7ce8f9820: Status 404 returned error can't find the container with id 4e23a1847ef1a094bad7103614ccffbf9bf8ce56bfacae0e749198f7ce8f9820 Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.742107 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ns5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-7nw5b_openstack-operators(2b8e6bee-2aae-4689-898a-b298fd5a3d00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.743396 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3635faed_4894_4eb8_94f7_33b055b860c4.slice/crio-0d8668a8ef14b0b38d93a2f8411af58d8acff361f48cdf865ba2b9b5eead1de3 WatchSource:0}: Error finding container 
0d8668a8ef14b0b38d93a2f8411af58d8acff361f48cdf865ba2b9b5eead1de3: Status 404 returned error can't find the container with id 0d8668a8ef14b0b38d93a2f8411af58d8acff361f48cdf865ba2b9b5eead1de3 Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.743501 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8"] Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.744355 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ns5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
mariadb-operator-controller-manager-79c8c4686c-7nw5b_openstack-operators(2b8e6bee-2aae-4689-898a-b298fd5a3d00): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.744444 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6w9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-l54jq_openstack-operators(2e7b36cb-508f-46e0-acd1-6eca36c331b1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.745303 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fsxzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-g9552_openstack-operators(d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.745525 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" podUID="2b8e6bee-2aae-4689-898a-b298fd5a3d00" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.747973 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6w9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-l54jq_openstack-operators(2e7b36cb-508f-46e0-acd1-6eca36c331b1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.749319 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fsxzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-g9552_openstack-operators(d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.749424 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" 
podUID="2e7b36cb-508f-46e0-acd1-6eca36c331b1" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.750240 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sk4cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rph2p_openstack-operators(3635faed-4894-4eb8-94f7-33b055b860c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.750515 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" podUID="d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.752450 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-r9hft"] Dec 11 08:38:19 crc kubenswrapper[4992]: W1211 08:38:19.753621 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c6deb1d_64a1_4f75_baaf_3ce6c908b850.slice/crio-d0a357a51506cd2b7ce5bc059a3c78fad0654d2066a4641e1905222227d06782 WatchSource:0}: Error finding container 
d0a357a51506cd2b7ce5bc059a3c78fad0654d2066a4641e1905222227d06782: Status 404 returned error can't find the container with id d0a357a51506cd2b7ce5bc059a3c78fad0654d2066a4641e1905222227d06782 Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.755271 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sk4cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-rph2p_openstack-operators(3635faed-4894-4eb8-94f7-33b055b860c4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.756480 4992 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" podUID="3635faed-4894-4eb8-94f7-33b055b860c4" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.758288 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwwz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-d9jmh_openstack-operators(ae97f467-cfd0-46c1-a261-36f09387f3e0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.759276 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} 
{} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cswbn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-kxgst_openstack-operators(5c6deb1d-64a1-4f75-baaf-3ce6c908b850): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.759548 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b"] Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.760318 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nwwz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-d9jmh_openstack-operators(ae97f467-cfd0-46c1-a261-36f09387f3e0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.760387 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" podUID="5c6deb1d-64a1-4f75-baaf-3ce6c908b850" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.761601 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" podUID="ae97f467-cfd0-46c1-a261-36f09387f3e0" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.764284 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/test-operator-controller-manager-5854674fcc-g9552"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.768876 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.773732 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.778752 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.782688 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst"] Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.850034 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" event={"ID":"b4d8b09b-a162-43bd-a91f-dc87e5c9c956","Type":"ContainerStarted","Data":"2eae8dfae065aa47a6fbd08590c2da0725321ac1326e7c8a512f71af6ae221c5"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.851334 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" event={"ID":"2b843345-399a-41e3-abe0-f7f41682250a","Type":"ContainerStarted","Data":"efe7075d92f8f1a24439ffc7b342b27ebdd62db16e6c4250ba057e5abb691a42"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.852273 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" event={"ID":"aa6fcfad-b39a-4621-aebe-0b48a4106495","Type":"ContainerStarted","Data":"79affd499467226bfe47279e1c315d85b8240f46735749239d1713e395081d58"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.853422 4992 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" event={"ID":"2b8e6bee-2aae-4689-898a-b298fd5a3d00","Type":"ContainerStarted","Data":"c8f3b91af3453545d21c6be97142f150395ff4298e4d3aac758212f92e075879"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.855281 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" event={"ID":"74f7d667-67f0-459b-a7a0-f46c0e095485","Type":"ContainerStarted","Data":"0430e079dc46ca6e66eaeed6336d68efba14defdd6e5540de778afa25a7fd30a"} Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.855469 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" podUID="2b8e6bee-2aae-4689-898a-b298fd5a3d00" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.856312 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" event={"ID":"d708dd00-6c6a-4dd0-ac04-e0b57a753f1f","Type":"ContainerStarted","Data":"3ab4bbab55f36d0c91751b3f83e2ffc5a50e3e30aec0aa91a841fdea74b34d58"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.860220 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" event={"ID":"3635faed-4894-4eb8-94f7-33b055b860c4","Type":"ContainerStarted","Data":"0d8668a8ef14b0b38d93a2f8411af58d8acff361f48cdf865ba2b9b5eead1de3"} Dec 11 08:38:19 crc kubenswrapper[4992]: 
I1211 08:38:19.861242 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" event={"ID":"ae97f467-cfd0-46c1-a261-36f09387f3e0","Type":"ContainerStarted","Data":"24231a2af05ed669a95513df3b36d2e6a1272f5fb1ae7091ef7ab24c86f9f879"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.863107 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" event={"ID":"2a91887f-977b-43dd-b638-0391348bf5d7","Type":"ContainerStarted","Data":"186b71b6767f5583f589141eb4b6f37a1aa73c681fa654fdf8f0a4d70fd2d4a2"} Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.863330 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" podUID="ae97f467-cfd0-46c1-a261-36f09387f3e0" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.863651 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" podUID="3635faed-4894-4eb8-94f7-33b055b860c4" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.864059 
4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" event={"ID":"22b393e2-e34e-4f47-a8f8-136d9a6613f6","Type":"ContainerStarted","Data":"34a63accd80d0007e8a5143faa135536fa89c9116ca3939b09ea13eb766f2947"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.864982 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" event={"ID":"e501b125-ca5e-41f0-88c1-a9fda63de236","Type":"ContainerStarted","Data":"055486516ace2495a8d03112722c42c63ec9e7d774ba742b88f04ed46cc9b395"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.866444 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" event={"ID":"2e7b36cb-508f-46e0-acd1-6eca36c331b1","Type":"ContainerStarted","Data":"55c3b4a75200d06c40294e00696076f17ffc09b290fe05a92980b64261be8be2"} Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.868519 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" podUID="2e7b36cb-508f-46e0-acd1-6eca36c331b1" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.872058 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" event={"ID":"d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a","Type":"ContainerStarted","Data":"4e23a1847ef1a094bad7103614ccffbf9bf8ce56bfacae0e749198f7ce8f9820"} Dec 11 08:38:19 crc 
kubenswrapper[4992]: E1211 08:38:19.878815 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" podUID="d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.881740 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" event={"ID":"42106600-d00d-477a-aaec-102ba03cb5c6","Type":"ContainerStarted","Data":"150b126959696e0a7352f8e7db6c556fa0c69f480090146e8719146b92565e1a"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.883459 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" event={"ID":"94ff8875-2a35-47c0-8da4-1fcc4fd0836e","Type":"ContainerStarted","Data":"888840071f3903de5a1bc46e564fd3f4f87f577b1355bb894646a84773e435f7"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.887008 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" event={"ID":"a39ad598-19ba-42cb-9f35-538b68de7b04","Type":"ContainerStarted","Data":"1e2b7eb046458d376b6f5af66533b47d42ab401f4ad59da7bf289662da4d28aa"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.888134 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" 
event={"ID":"5c6deb1d-64a1-4f75-baaf-3ce6c908b850","Type":"ContainerStarted","Data":"d0a357a51506cd2b7ce5bc059a3c78fad0654d2066a4641e1905222227d06782"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.889029 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" event={"ID":"07859dd8-8995-4214-8ee9-6648fa5a292e","Type":"ContainerStarted","Data":"935ca43ade978f24273ec43075fef3a679e12cc78d2ac0bead089e65020be689"} Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.890001 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" event={"ID":"995a7c64-c843-4200-b1cf-9fe6d774f457","Type":"ContainerStarted","Data":"1ed333b70591c2f7730b906342932f0a2bf1780b08b761efb01d5984dbe00868"} Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.890477 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" podUID="5c6deb1d-64a1-4f75-baaf-3ce6c908b850" Dec 11 08:38:19 crc kubenswrapper[4992]: I1211 08:38:19.914360 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.914560 4992 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:19 crc kubenswrapper[4992]: E1211 08:38:19.914648 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert podName:24ddc127-1ac3-4dd9-ae14-c133c9ad387b nodeName:}" failed. No retries permitted until 2025-12-11 08:38:21.914616429 +0000 UTC m=+926.174090355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5wrjt" (UID: "24ddc127-1ac3-4dd9-ae14-c133c9ad387b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:20 crc kubenswrapper[4992]: I1211 08:38:20.320771 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:20 crc kubenswrapper[4992]: I1211 08:38:20.320838 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.321016 4992 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.321116 4992 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:22.321089597 +0000 UTC m=+926.580563543 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "webhook-server-cert" not found Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.321167 4992 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.321304 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:22.321269591 +0000 UTC m=+926.580743517 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "metrics-server-cert" not found Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.900884 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" podUID="5c6deb1d-64a1-4f75-baaf-3ce6c908b850" Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.901510 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" podUID="2e7b36cb-508f-46e0-acd1-6eca36c331b1" Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.902195 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" podUID="d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a" Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.903229 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" podUID="3635faed-4894-4eb8-94f7-33b055b860c4" Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.903652 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" podUID="ae97f467-cfd0-46c1-a261-36f09387f3e0" Dec 11 08:38:20 crc kubenswrapper[4992]: E1211 08:38:20.904028 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" podUID="2b8e6bee-2aae-4689-898a-b298fd5a3d00" Dec 11 08:38:21 crc kubenswrapper[4992]: I1211 08:38:21.543128 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:21 crc kubenswrapper[4992]: E1211 08:38:21.543585 4992 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:21 crc kubenswrapper[4992]: E1211 08:38:21.543676 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert podName:fc892dae-199a-49ca-8ddd-863a6b8426d7 nodeName:}" failed. No retries permitted until 2025-12-11 08:38:25.543657769 +0000 UTC m=+929.803131695 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert") pod "infra-operator-controller-manager-78d48bff9d-hd9fc" (UID: "fc892dae-199a-49ca-8ddd-863a6b8426d7") : secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:21 crc kubenswrapper[4992]: I1211 08:38:21.949107 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:21 crc kubenswrapper[4992]: E1211 08:38:21.949268 4992 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:21 crc kubenswrapper[4992]: E1211 08:38:21.949319 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert podName:24ddc127-1ac3-4dd9-ae14-c133c9ad387b nodeName:}" failed. No retries permitted until 2025-12-11 08:38:25.949304247 +0000 UTC m=+930.208778173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5wrjt" (UID: "24ddc127-1ac3-4dd9-ae14-c133c9ad387b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:22 crc kubenswrapper[4992]: I1211 08:38:22.355473 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:22 crc kubenswrapper[4992]: I1211 08:38:22.355856 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:22 crc kubenswrapper[4992]: E1211 08:38:22.355688 4992 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 08:38:22 crc kubenswrapper[4992]: E1211 08:38:22.355965 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:26.355947189 +0000 UTC m=+930.615421115 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "webhook-server-cert" not found Dec 11 08:38:22 crc kubenswrapper[4992]: E1211 08:38:22.356099 4992 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 08:38:22 crc kubenswrapper[4992]: E1211 08:38:22.356180 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:26.356162915 +0000 UTC m=+930.615636891 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "metrics-server-cert" not found Dec 11 08:38:25 crc kubenswrapper[4992]: I1211 08:38:25.603718 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:25 crc kubenswrapper[4992]: E1211 08:38:25.603931 4992 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:25 crc kubenswrapper[4992]: E1211 08:38:25.604013 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert 
podName:fc892dae-199a-49ca-8ddd-863a6b8426d7 nodeName:}" failed. No retries permitted until 2025-12-11 08:38:33.603991801 +0000 UTC m=+937.863465807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert") pod "infra-operator-controller-manager-78d48bff9d-hd9fc" (UID: "fc892dae-199a-49ca-8ddd-863a6b8426d7") : secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:26 crc kubenswrapper[4992]: I1211 08:38:26.011495 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:26 crc kubenswrapper[4992]: E1211 08:38:26.011711 4992 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:26 crc kubenswrapper[4992]: E1211 08:38:26.012085 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert podName:24ddc127-1ac3-4dd9-ae14-c133c9ad387b nodeName:}" failed. No retries permitted until 2025-12-11 08:38:34.012059429 +0000 UTC m=+938.271533395 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f5wrjt" (UID: "24ddc127-1ac3-4dd9-ae14-c133c9ad387b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 08:38:26 crc kubenswrapper[4992]: I1211 08:38:26.419379 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:26 crc kubenswrapper[4992]: I1211 08:38:26.419593 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:26 crc kubenswrapper[4992]: E1211 08:38:26.419756 4992 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 08:38:26 crc kubenswrapper[4992]: E1211 08:38:26.419828 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:34.419810558 +0000 UTC m=+938.679284484 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "webhook-server-cert" not found Dec 11 08:38:26 crc kubenswrapper[4992]: E1211 08:38:26.419884 4992 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 08:38:26 crc kubenswrapper[4992]: E1211 08:38:26.419962 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:34.419942891 +0000 UTC m=+938.679416867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "metrics-server-cert" not found Dec 11 08:38:33 crc kubenswrapper[4992]: E1211 08:38:33.263383 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 11 08:38:33 crc kubenswrapper[4992]: E1211 08:38:33.264085 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5clsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-55h47_openstack-operators(07859dd8-8995-4214-8ee9-6648fa5a292e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:38:33 crc kubenswrapper[4992]: I1211 08:38:33.628085 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:33 crc kubenswrapper[4992]: E1211 08:38:33.628353 4992 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:33 crc kubenswrapper[4992]: E1211 08:38:33.628448 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert podName:fc892dae-199a-49ca-8ddd-863a6b8426d7 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:38:49.628429461 +0000 UTC m=+953.887903387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert") pod "infra-operator-controller-manager-78d48bff9d-hd9fc" (UID: "fc892dae-199a-49ca-8ddd-863a6b8426d7") : secret "infra-operator-webhook-server-cert" not found Dec 11 08:38:34 crc kubenswrapper[4992]: I1211 08:38:34.033570 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:34 crc kubenswrapper[4992]: I1211 08:38:34.039240 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/24ddc127-1ac3-4dd9-ae14-c133c9ad387b-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f5wrjt\" (UID: \"24ddc127-1ac3-4dd9-ae14-c133c9ad387b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:34 crc kubenswrapper[4992]: I1211 08:38:34.317165 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:38:34 crc kubenswrapper[4992]: I1211 08:38:34.439276 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:34 crc kubenswrapper[4992]: I1211 08:38:34.439328 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:34 crc kubenswrapper[4992]: E1211 08:38:34.439466 4992 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 08:38:34 crc kubenswrapper[4992]: E1211 08:38:34.439528 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs podName:4c6d188e-81e4-4ba9-a555-5dbda4f39d1d nodeName:}" failed. No retries permitted until 2025-12-11 08:38:50.439510171 +0000 UTC m=+954.698984107 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs") pod "openstack-operator-controller-manager-5dfd9f965d-t689z" (UID: "4c6d188e-81e4-4ba9-a555-5dbda4f39d1d") : secret "metrics-server-cert" not found Dec 11 08:38:34 crc kubenswrapper[4992]: I1211 08:38:34.442742 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-webhook-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:35 crc kubenswrapper[4992]: I1211 08:38:35.378614 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:38:35 crc kubenswrapper[4992]: I1211 08:38:35.378713 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:38:45 crc kubenswrapper[4992]: E1211 08:38:45.060542 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 11 08:38:45 crc kubenswrapper[4992]: E1211 08:38:45.061299 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgx6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-rd8j7_openstack-operators(aa6fcfad-b39a-4621-aebe-0b48a4106495): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:38:45 crc kubenswrapper[4992]: E1211 08:38:45.920182 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 11 08:38:45 crc kubenswrapper[4992]: E1211 08:38:45.920726 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gj6mk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-8q2xn_openstack-operators(d708dd00-6c6a-4dd0-ac04-e0b57a753f1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:38:45 crc kubenswrapper[4992]: I1211 08:38:45.922829 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 08:38:49 crc kubenswrapper[4992]: E1211 08:38:49.615560 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 11 08:38:49 crc kubenswrapper[4992]: E1211 08:38:49.616729 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4ngfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-4lp9k_openstack-operators(2b843345-399a-41e3-abe0-f7f41682250a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:38:49 crc kubenswrapper[4992]: I1211 08:38:49.676879 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:49 crc kubenswrapper[4992]: I1211 08:38:49.689449 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fc892dae-199a-49ca-8ddd-863a6b8426d7-cert\") pod \"infra-operator-controller-manager-78d48bff9d-hd9fc\" (UID: \"fc892dae-199a-49ca-8ddd-863a6b8426d7\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:49 crc kubenswrapper[4992]: I1211 08:38:49.892281 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:38:50 crc kubenswrapper[4992]: I1211 08:38:50.506822 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:50 crc kubenswrapper[4992]: I1211 08:38:50.513659 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c6d188e-81e4-4ba9-a555-5dbda4f39d1d-metrics-certs\") pod \"openstack-operator-controller-manager-5dfd9f965d-t689z\" (UID: \"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d\") " pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:50 crc kubenswrapper[4992]: I1211 08:38:50.807782 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:50 crc kubenswrapper[4992]: E1211 08:38:50.959130 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 11 08:38:50 crc kubenswrapper[4992]: E1211 08:38:50.959321 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cqbfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-9r2v8_openstack-operators(22b393e2-e34e-4f47-a8f8-136d9a6613f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:38:53 crc kubenswrapper[4992]: E1211 08:38:53.330883 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 11 08:38:53 crc kubenswrapper[4992]: E1211 08:38:53.331355 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mzp4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-l6gnl_openstack-operators(74f7d667-67f0-459b-a7a0-f46c0e095485): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:38:55 crc kubenswrapper[4992]: I1211 08:38:55.395595 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc"] Dec 11 08:38:55 crc kubenswrapper[4992]: I1211 08:38:55.496981 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt"] Dec 11 08:38:55 crc kubenswrapper[4992]: I1211 08:38:55.711663 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z"] Dec 11 08:38:55 crc kubenswrapper[4992]: W1211 08:38:55.829952 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc892dae_199a_49ca_8ddd_863a6b8426d7.slice/crio-26133c570e3f2824f23a13ba775f2ba5b3d7a4e8a7c4e982b95568f5bf619897 WatchSource:0}: Error finding container 
26133c570e3f2824f23a13ba775f2ba5b3d7a4e8a7c4e982b95568f5bf619897: Status 404 returned error can't find the container with id 26133c570e3f2824f23a13ba775f2ba5b3d7a4e8a7c4e982b95568f5bf619897 Dec 11 08:38:55 crc kubenswrapper[4992]: W1211 08:38:55.837593 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ddc127_1ac3_4dd9_ae14_c133c9ad387b.slice/crio-59fe37de4ec67c8633ce149a432bbd514c1b74a82631f981c424937f8e0a2120 WatchSource:0}: Error finding container 59fe37de4ec67c8633ce149a432bbd514c1b74a82631f981c424937f8e0a2120: Status 404 returned error can't find the container with id 59fe37de4ec67c8633ce149a432bbd514c1b74a82631f981c424937f8e0a2120 Dec 11 08:38:55 crc kubenswrapper[4992]: W1211 08:38:55.839461 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6d188e_81e4_4ba9_a555_5dbda4f39d1d.slice/crio-557aa8380825ca9c6ec746dc4e200a978561d819e2dde5755a39a801193da82c WatchSource:0}: Error finding container 557aa8380825ca9c6ec746dc4e200a978561d819e2dde5755a39a801193da82c: Status 404 returned error can't find the container with id 557aa8380825ca9c6ec746dc4e200a978561d819e2dde5755a39a801193da82c Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.148136 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" event={"ID":"995a7c64-c843-4200-b1cf-9fe6d774f457","Type":"ContainerStarted","Data":"075ac13738657f193e8d183f2f38bba3b58372278fda9eeb724fb5a39530358c"} Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.183691 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" event={"ID":"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d","Type":"ContainerStarted","Data":"557aa8380825ca9c6ec746dc4e200a978561d819e2dde5755a39a801193da82c"} Dec 11 08:38:56 crc 
kubenswrapper[4992]: I1211 08:38:56.187610 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" event={"ID":"e501b125-ca5e-41f0-88c1-a9fda63de236","Type":"ContainerStarted","Data":"fcafcb2ebfd9abf304c282ee5a06f48c5390fc528cbb1cf9e42c752cf1b0ac16"} Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.190820 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" event={"ID":"fc892dae-199a-49ca-8ddd-863a6b8426d7","Type":"ContainerStarted","Data":"26133c570e3f2824f23a13ba775f2ba5b3d7a4e8a7c4e982b95568f5bf619897"} Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.212793 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" event={"ID":"a39ad598-19ba-42cb-9f35-538b68de7b04","Type":"ContainerStarted","Data":"86def0bcc71f07abf28bee0d9be61c99f572a0ff2de2179dca11f918df98fcd0"} Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.214312 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" event={"ID":"a84e6e65-9f83-405a-a478-a53e125d5845","Type":"ContainerStarted","Data":"3cc2c974b69e79f987148c0a7ae34a9c0819ed001ac773d7703ff9e828a69395"} Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.215895 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" event={"ID":"24ddc127-1ac3-4dd9-ae14-c133c9ad387b","Type":"ContainerStarted","Data":"59fe37de4ec67c8633ce149a432bbd514c1b74a82631f981c424937f8e0a2120"} Dec 11 08:38:56 crc kubenswrapper[4992]: I1211 08:38:56.224917 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" 
event={"ID":"94ff8875-2a35-47c0-8da4-1fcc4fd0836e","Type":"ContainerStarted","Data":"b4b454da4ab0796195a3dc09dee7f436061267d48fbd3c700ec0a417463e1778"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.254418 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" event={"ID":"ae97f467-cfd0-46c1-a261-36f09387f3e0","Type":"ContainerStarted","Data":"8c8bce0a6c6ab7aee7e30665b73b3b66cd639f6a1396063f69dc49b08e172a5e"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.260347 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" event={"ID":"2b8e6bee-2aae-4689-898a-b298fd5a3d00","Type":"ContainerStarted","Data":"d500293384d014287fd5d819f49cb25d9371c05b3be161246538eb3a54334c3c"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.292973 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" event={"ID":"2a91887f-977b-43dd-b638-0391348bf5d7","Type":"ContainerStarted","Data":"a9913c25a0e7494d2926428e0e50dfe4997ce483ab1f5fe7a704010364f92595"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.305053 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" event={"ID":"4c6d188e-81e4-4ba9-a555-5dbda4f39d1d","Type":"ContainerStarted","Data":"90627c9e29a2f6ce9712269e25db6436c43806e84d1838da794e8bc73396ef7c"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.306146 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.312689 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" 
event={"ID":"d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a","Type":"ContainerStarted","Data":"b64e3491a5ab53564465988dd8313d0484d88a6928b80e7916c398ec71f5637c"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.331889 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" event={"ID":"3635faed-4894-4eb8-94f7-33b055b860c4","Type":"ContainerStarted","Data":"33c078b5d5a0b523d0387c418ed5991ce1caed08d750ed994e4686a512272c48"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.335030 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" event={"ID":"b4d8b09b-a162-43bd-a91f-dc87e5c9c956","Type":"ContainerStarted","Data":"a5b7acfc3ea1767af86da8d42de9d10f8dd1cfd38ba45a3b95bba9c385713180"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.339290 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" event={"ID":"2e7b36cb-508f-46e0-acd1-6eca36c331b1","Type":"ContainerStarted","Data":"194a893755021268ab894e98924c0e23387858a225dd3b9b0e8ab0994ab32301"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.347738 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" event={"ID":"42106600-d00d-477a-aaec-102ba03cb5c6","Type":"ContainerStarted","Data":"1f4c5a5ce2d514c522027876488d9e9c1dce63a5f5aebbc593115ca753b8eba4"} Dec 11 08:38:57 crc kubenswrapper[4992]: I1211 08:38:57.356916 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" podStartSLOduration=39.356896153 podStartE2EDuration="39.356896153s" podCreationTimestamp="2025-12-11 08:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 08:38:57.352877094 +0000 UTC m=+961.612351020" watchObservedRunningTime="2025-12-11 08:38:57.356896153 +0000 UTC m=+961.616370079" Dec 11 08:38:58 crc kubenswrapper[4992]: I1211 08:38:58.358542 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" event={"ID":"5c6deb1d-64a1-4f75-baaf-3ce6c908b850","Type":"ContainerStarted","Data":"34196f1d907c37a3f15a2d219c4cee8b69d78705b856d545f88c0602768c57ff"} Dec 11 08:38:58 crc kubenswrapper[4992]: I1211 08:38:58.390923 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-kxgst" podStartSLOduration=4.820766226 podStartE2EDuration="40.390900513s" podCreationTimestamp="2025-12-11 08:38:18 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.759136643 +0000 UTC m=+924.018610579" lastFinishedPulling="2025-12-11 08:38:55.32927094 +0000 UTC m=+959.588744866" observedRunningTime="2025-12-11 08:38:58.384310671 +0000 UTC m=+962.643784617" watchObservedRunningTime="2025-12-11 08:38:58.390900513 +0000 UTC m=+962.650374439" Dec 11 08:39:00 crc kubenswrapper[4992]: E1211 08:39:00.756863 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" podUID="2b843345-399a-41e3-abe0-f7f41682250a" Dec 11 08:39:00 crc kubenswrapper[4992]: E1211 08:39:00.760027 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" podUID="22b393e2-e34e-4f47-a8f8-136d9a6613f6" Dec 11 08:39:01 crc kubenswrapper[4992]: E1211 08:39:01.102244 4992 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" podUID="d708dd00-6c6a-4dd0-ac04-e0b57a753f1f" Dec 11 08:39:01 crc kubenswrapper[4992]: E1211 08:39:01.107052 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" podUID="aa6fcfad-b39a-4621-aebe-0b48a4106495" Dec 11 08:39:01 crc kubenswrapper[4992]: E1211 08:39:01.187223 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" podUID="74f7d667-67f0-459b-a7a0-f46c0e095485" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.395886 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" event={"ID":"aa6fcfad-b39a-4621-aebe-0b48a4106495","Type":"ContainerStarted","Data":"112d373b9fdc39d8279b4fa29b56c1a6f97cf797ef54d1a7aaf1a949d05b435d"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.400111 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" event={"ID":"22b393e2-e34e-4f47-a8f8-136d9a6613f6","Type":"ContainerStarted","Data":"5afc73438d9a22138ff0d63af96fc3fe9eb9ad7beb57e51ff01169088ac19c61"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.408330 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" 
event={"ID":"74f7d667-67f0-459b-a7a0-f46c0e095485","Type":"ContainerStarted","Data":"f54af48f4e1859be46a3d51c37cf919e415244b6294b478ce38ffda54d3cf413"} Dec 11 08:39:01 crc kubenswrapper[4992]: E1211 08:39:01.411857 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" podUID="74f7d667-67f0-459b-a7a0-f46c0e095485" Dec 11 08:39:01 crc kubenswrapper[4992]: E1211 08:39:01.428083 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" podUID="07859dd8-8995-4214-8ee9-6648fa5a292e" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.461070 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" event={"ID":"e501b125-ca5e-41f0-88c1-a9fda63de236","Type":"ContainerStarted","Data":"002d4e123c28052e031806917eb4f4e663db08e00ae6333464fc5726ea7aa783"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.461741 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.464187 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.473582 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" 
event={"ID":"fc892dae-199a-49ca-8ddd-863a6b8426d7","Type":"ContainerStarted","Data":"163ea4e3d2a7e30f3728e4dfaf380bfd7668b73852057373f6d24485c105c565"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.483406 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" event={"ID":"a84e6e65-9f83-405a-a478-a53e125d5845","Type":"ContainerStarted","Data":"d52fd21394a5303c772019c2a20b2974d8b7698071971e7196e51192f7e1fb62"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.484210 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.488843 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.507086 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" event={"ID":"2b843345-399a-41e3-abe0-f7f41682250a","Type":"ContainerStarted","Data":"858cc76d879eace66d1941a888a8fa733d043d1330d83dfabac66f139a726938"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.516688 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" event={"ID":"d708dd00-6c6a-4dd0-ac04-e0b57a753f1f","Type":"ContainerStarted","Data":"c5ea26100aaa21de2768a6e504631da3200ad537d4f5c3b6940fa64ab31743eb"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.531017 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-hsldx" podStartSLOduration=3.541155236 podStartE2EDuration="44.531000662s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 
08:38:19.311643963 +0000 UTC m=+923.571117889" lastFinishedPulling="2025-12-11 08:39:00.301489389 +0000 UTC m=+964.560963315" observedRunningTime="2025-12-11 08:39:01.527548167 +0000 UTC m=+965.787022083" watchObservedRunningTime="2025-12-11 08:39:01.531000662 +0000 UTC m=+965.790474588" Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.532190 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" event={"ID":"24ddc127-1ac3-4dd9-ae14-c133c9ad387b","Type":"ContainerStarted","Data":"082c173ddb2b10c567384021ecd81c547faa045900914202d32b5695cc232911"} Dec 11 08:39:01 crc kubenswrapper[4992]: I1211 08:39:01.552320 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-l5mc5" podStartSLOduration=2.968087418 podStartE2EDuration="44.552302007s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:18.732080075 +0000 UTC m=+922.991554001" lastFinishedPulling="2025-12-11 08:39:00.316294664 +0000 UTC m=+964.575768590" observedRunningTime="2025-12-11 08:39:01.545972832 +0000 UTC m=+965.805446748" watchObservedRunningTime="2025-12-11 08:39:01.552302007 +0000 UTC m=+965.811775933" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.541484 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" event={"ID":"995a7c64-c843-4200-b1cf-9fe6d774f457","Type":"ContainerStarted","Data":"87fc455c887533cf3c6b1d03dbf23e74fac7084f7cfb570d468ad7b442be2b72"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.542328 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.544159 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.544189 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" event={"ID":"b4d8b09b-a162-43bd-a91f-dc87e5c9c956","Type":"ContainerStarted","Data":"1c8cd4916e48f2a4ee269f2b527d00ca3036ea7b5eca20d309131962a104834e"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.544386 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.545729 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.546409 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" event={"ID":"a39ad598-19ba-42cb-9f35-538b68de7b04","Type":"ContainerStarted","Data":"55f88db666fdb22c18e7c2680dda620809bed9661477bfd432c12bd0f3012a4b"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.546582 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.547897 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.548188 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" event={"ID":"2b843345-399a-41e3-abe0-f7f41682250a","Type":"ContainerStarted","Data":"3e8d6795b0c80245a48239b124a60db86481e966e6b5491f36e0b40e278ae40a"} Dec 
11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.550897 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" event={"ID":"3635faed-4894-4eb8-94f7-33b055b860c4","Type":"ContainerStarted","Data":"617fa5daa0d732a8275c6e76661e525fddfd024e11e9fff54984c00c82a8bdf7"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.551286 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.552772 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.554688 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" event={"ID":"94ff8875-2a35-47c0-8da4-1fcc4fd0836e","Type":"ContainerStarted","Data":"83be4e9211ef7f0a27a33d62e8c52a1f710651a5f75f0a003bb7be3a4d880ca5"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.554954 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.556610 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" event={"ID":"2a91887f-977b-43dd-b638-0391348bf5d7","Type":"ContainerStarted","Data":"a02cdb2c95d1ed893dcd4331a114989b145d0f866b6839a5b4dc7c79253d2665"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.556804 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.558385 4992 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.558813 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" event={"ID":"fc892dae-199a-49ca-8ddd-863a6b8426d7","Type":"ContainerStarted","Data":"8848ceae85986e781a9d1b69a04f9ae1bfbd98d7ec04d5acd95bde2c8c634865"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.558906 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.559821 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.561402 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" event={"ID":"d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a","Type":"ContainerStarted","Data":"527ddfc75cdd1cbfc808d59eb47769f48329bff86a251859639ffdc784721d35"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.562384 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.563953 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" event={"ID":"07859dd8-8995-4214-8ee9-6648fa5a292e","Type":"ContainerStarted","Data":"dd81a56f5b71b93f7c01e7e84a540a94ce5b277cf5112b8b2bcd8e7c850ca60e"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.565316 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" Dec 
11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.566272 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" event={"ID":"d708dd00-6c6a-4dd0-ac04-e0b57a753f1f","Type":"ContainerStarted","Data":"3930acb83f3c2780515fe7476b1ffccba6a0657735bbac796f3bfebaa12a4050"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.566691 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.569699 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" event={"ID":"24ddc127-1ac3-4dd9-ae14-c133c9ad387b","Type":"ContainerStarted","Data":"85a07e400855d4b382aea2ff0b5db2ac044583a9a8412b3a26ef445801b4f622"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.569928 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.573809 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" event={"ID":"ae97f467-cfd0-46c1-a261-36f09387f3e0","Type":"ContainerStarted","Data":"8a61331b129436a4f26e4a62dd108b74cf6f15d4a5ce17124b32c878d67c0e18"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.574841 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.577264 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" 
event={"ID":"2e7b36cb-508f-46e0-acd1-6eca36c331b1","Type":"ContainerStarted","Data":"c4f91913a2472f760be3772e14b94bfff4daf1a50c23b182646cb6fae8a54ef9"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.577708 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-2rs5h" podStartSLOduration=4.519118574 podStartE2EDuration="45.577686605s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.711831836 +0000 UTC m=+923.971305762" lastFinishedPulling="2025-12-11 08:39:00.770399867 +0000 UTC m=+965.029873793" observedRunningTime="2025-12-11 08:39:02.570970089 +0000 UTC m=+966.830444015" watchObservedRunningTime="2025-12-11 08:39:02.577686605 +0000 UTC m=+966.837160531" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.577887 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.582595 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.582750 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.583669 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" event={"ID":"42106600-d00d-477a-aaec-102ba03cb5c6","Type":"ContainerStarted","Data":"da0a4a9c23a214e4d562774f248c8329b0f3dd2b39675d57b8df13bb421e4c26"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.585572 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.588668 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.596806 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" event={"ID":"2b8e6bee-2aae-4689-898a-b298fd5a3d00","Type":"ContainerStarted","Data":"f26bd9acc04e28f3cdc3485d9a1cea4b861aac0063b193d50bfc76640656e969"} Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.596856 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:39:02 crc kubenswrapper[4992]: E1211 08:39:02.601296 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" podUID="74f7d667-67f0-459b-a7a0-f46c0e095485" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.604906 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.618674 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-rph2p" podStartSLOduration=4.00085781 podStartE2EDuration="44.618656126s" podCreationTimestamp="2025-12-11 08:38:18 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.750129992 +0000 UTC m=+924.009603908" lastFinishedPulling="2025-12-11 
08:39:00.367928298 +0000 UTC m=+964.627402224" observedRunningTime="2025-12-11 08:39:02.614575755 +0000 UTC m=+966.874049691" watchObservedRunningTime="2025-12-11 08:39:02.618656126 +0000 UTC m=+966.878130052" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.665218 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-6gl95" podStartSLOduration=3.851478963 podStartE2EDuration="45.665200374s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:18.960245404 +0000 UTC m=+923.219719330" lastFinishedPulling="2025-12-11 08:39:00.773966815 +0000 UTC m=+965.033440741" observedRunningTime="2025-12-11 08:39:02.6597804 +0000 UTC m=+966.919254346" watchObservedRunningTime="2025-12-11 08:39:02.665200374 +0000 UTC m=+966.924674310" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.750258 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" podStartSLOduration=3.784247374 podStartE2EDuration="44.750235422s" podCreationTimestamp="2025-12-11 08:38:18 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.74442827 +0000 UTC m=+924.003902196" lastFinishedPulling="2025-12-11 08:39:00.710416318 +0000 UTC m=+964.969890244" observedRunningTime="2025-12-11 08:39:02.740730527 +0000 UTC m=+967.000204463" watchObservedRunningTime="2025-12-11 08:39:02.750235422 +0000 UTC m=+967.009709348" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.788896 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-jqlq7" podStartSLOduration=4.554819524 podStartE2EDuration="45.788877045s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.329180296 +0000 UTC m=+923.588654222" lastFinishedPulling="2025-12-11 08:39:00.563237817 +0000 UTC 
m=+964.822711743" observedRunningTime="2025-12-11 08:39:02.786983128 +0000 UTC m=+967.046457064" watchObservedRunningTime="2025-12-11 08:39:02.788877045 +0000 UTC m=+967.048350971" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.828247 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" podStartSLOduration=41.423683511 podStartE2EDuration="45.828220465s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:55.838754229 +0000 UTC m=+960.098228155" lastFinishedPulling="2025-12-11 08:39:00.243291183 +0000 UTC m=+964.502765109" observedRunningTime="2025-12-11 08:39:02.816016815 +0000 UTC m=+967.075490741" watchObservedRunningTime="2025-12-11 08:39:02.828220465 +0000 UTC m=+967.087694391" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.917192 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" podStartSLOduration=3.577930604 podStartE2EDuration="45.91717164s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.741845307 +0000 UTC m=+924.001319233" lastFinishedPulling="2025-12-11 08:39:02.081086343 +0000 UTC m=+966.340560269" observedRunningTime="2025-12-11 08:39:02.85635015 +0000 UTC m=+967.115824076" watchObservedRunningTime="2025-12-11 08:39:02.91717164 +0000 UTC m=+967.176645566" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.921774 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" podStartSLOduration=41.477836237 podStartE2EDuration="45.921754413s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:55.844834419 +0000 UTC m=+960.104308345" lastFinishedPulling="2025-12-11 08:39:00.288752595 +0000 UTC m=+964.548226521" 
observedRunningTime="2025-12-11 08:39:02.911134191 +0000 UTC m=+967.170608117" watchObservedRunningTime="2025-12-11 08:39:02.921754413 +0000 UTC m=+967.181228369" Dec 11 08:39:02 crc kubenswrapper[4992]: I1211 08:39:02.983356 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-r9hft" podStartSLOduration=5.107473218 podStartE2EDuration="45.983327232s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.687385113 +0000 UTC m=+923.946859039" lastFinishedPulling="2025-12-11 08:39:00.563239127 +0000 UTC m=+964.822713053" observedRunningTime="2025-12-11 08:39:02.95444311 +0000 UTC m=+967.213917046" watchObservedRunningTime="2025-12-11 08:39:02.983327232 +0000 UTC m=+967.242801168" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.027554 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-r792s" podStartSLOduration=5.324101133 podStartE2EDuration="46.027526523s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.694095788 +0000 UTC m=+923.953569724" lastFinishedPulling="2025-12-11 08:39:00.397521188 +0000 UTC m=+964.656995114" observedRunningTime="2025-12-11 08:39:03.00838571 +0000 UTC m=+967.267859636" watchObservedRunningTime="2025-12-11 08:39:03.027526523 +0000 UTC m=+967.287000449" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.089361 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-d9jmh" podStartSLOduration=4.019905309 podStartE2EDuration="45.089341228s" podCreationTimestamp="2025-12-11 08:38:18 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.758147599 +0000 UTC m=+924.017621525" lastFinishedPulling="2025-12-11 08:39:00.827583518 +0000 UTC m=+965.087057444" observedRunningTime="2025-12-11 
08:39:03.085952435 +0000 UTC m=+967.345426361" watchObservedRunningTime="2025-12-11 08:39:03.089341228 +0000 UTC m=+967.348815154" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.116860 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-bdp4x" podStartSLOduration=4.5242346300000005 podStartE2EDuration="46.116838136s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.359814802 +0000 UTC m=+923.619288718" lastFinishedPulling="2025-12-11 08:39:00.952418298 +0000 UTC m=+965.211892224" observedRunningTime="2025-12-11 08:39:03.115295278 +0000 UTC m=+967.374769204" watchObservedRunningTime="2025-12-11 08:39:03.116838136 +0000 UTC m=+967.376312072" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.154221 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-l54jq" podStartSLOduration=3.999215807 podStartE2EDuration="45.154199928s" podCreationTimestamp="2025-12-11 08:38:18 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.744280306 +0000 UTC m=+924.003754232" lastFinishedPulling="2025-12-11 08:39:00.899264427 +0000 UTC m=+965.158738353" observedRunningTime="2025-12-11 08:39:03.153262595 +0000 UTC m=+967.412736521" watchObservedRunningTime="2025-12-11 08:39:03.154199928 +0000 UTC m=+967.413673854" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.602341 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" event={"ID":"aa6fcfad-b39a-4621-aebe-0b48a4106495","Type":"ContainerStarted","Data":"da282201ada06d78f123158c4617c9a931e06b704e698d22a297e1a3c9741bee"} Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.602491 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.603795 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" event={"ID":"22b393e2-e34e-4f47-a8f8-136d9a6613f6","Type":"ContainerStarted","Data":"588dd49aafede1f115ca2097bb878c12901e549fd6d3fd13b2c7fd8abf27bb53"} Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.621414 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-7nw5b" podStartSLOduration=5.593295154 podStartE2EDuration="46.621394024s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.74196868 +0000 UTC m=+924.001442606" lastFinishedPulling="2025-12-11 08:39:00.77006755 +0000 UTC m=+965.029541476" observedRunningTime="2025-12-11 08:39:03.192224755 +0000 UTC m=+967.451698691" watchObservedRunningTime="2025-12-11 08:39:03.621394024 +0000 UTC m=+967.880867960" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.624515 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" podStartSLOduration=3.644454245 podStartE2EDuration="46.624506131s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.686252295 +0000 UTC m=+923.945726221" lastFinishedPulling="2025-12-11 08:39:02.666304181 +0000 UTC m=+966.925778107" observedRunningTime="2025-12-11 08:39:03.619454986 +0000 UTC m=+967.878928922" watchObservedRunningTime="2025-12-11 08:39:03.624506131 +0000 UTC m=+967.883980057" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.642877 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" podStartSLOduration=3.9996434069999998 
podStartE2EDuration="46.642852613s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.725162745 +0000 UTC m=+923.984636671" lastFinishedPulling="2025-12-11 08:39:02.368371951 +0000 UTC m=+966.627845877" observedRunningTime="2025-12-11 08:39:03.635740928 +0000 UTC m=+967.895214854" watchObservedRunningTime="2025-12-11 08:39:03.642852613 +0000 UTC m=+967.902326549" Dec 11 08:39:03 crc kubenswrapper[4992]: I1211 08:39:03.654033 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" podStartSLOduration=4.168116093 podStartE2EDuration="46.654013728s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.717898065 +0000 UTC m=+923.977371991" lastFinishedPulling="2025-12-11 08:39:02.20379571 +0000 UTC m=+966.463269626" observedRunningTime="2025-12-11 08:39:03.652511001 +0000 UTC m=+967.911984937" watchObservedRunningTime="2025-12-11 08:39:03.654013728 +0000 UTC m=+967.913487654" Dec 11 08:39:04 crc kubenswrapper[4992]: I1211 08:39:04.612488 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" event={"ID":"07859dd8-8995-4214-8ee9-6648fa5a292e","Type":"ContainerStarted","Data":"8047beabdeea6484a24d0a4f3b7d986f343a4e7d869a2f52e7d89225b2f6a4b1"} Dec 11 08:39:04 crc kubenswrapper[4992]: I1211 08:39:04.614177 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:39:05 crc kubenswrapper[4992]: I1211 08:39:05.379047 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:39:05 crc 
kubenswrapper[4992]: I1211 08:39:05.379111 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:39:05 crc kubenswrapper[4992]: I1211 08:39:05.620416 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:39:08 crc kubenswrapper[4992]: I1211 08:39:08.401186 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:39:08 crc kubenswrapper[4992]: I1211 08:39:08.404964 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-4lp9k" Dec 11 08:39:08 crc kubenswrapper[4992]: I1211 08:39:08.426959 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" podStartSLOduration=6.638773417 podStartE2EDuration="51.426936851s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.302940298 +0000 UTC m=+923.562414234" lastFinishedPulling="2025-12-11 08:39:04.091103732 +0000 UTC m=+968.350577668" observedRunningTime="2025-12-11 08:39:04.633910353 +0000 UTC m=+968.893384279" watchObservedRunningTime="2025-12-11 08:39:08.426936851 +0000 UTC m=+972.686410777" Dec 11 08:39:08 crc kubenswrapper[4992]: I1211 08:39:08.448400 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-8q2xn" Dec 11 08:39:08 crc kubenswrapper[4992]: I1211 08:39:08.503576 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-rd8j7" Dec 11 08:39:08 crc kubenswrapper[4992]: I1211 08:39:08.561034 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-9r2v8" Dec 11 08:39:09 crc kubenswrapper[4992]: I1211 08:39:09.898540 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-hd9fc" Dec 11 08:39:10 crc kubenswrapper[4992]: I1211 08:39:10.814702 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5dfd9f965d-t689z" Dec 11 08:39:14 crc kubenswrapper[4992]: I1211 08:39:14.324985 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f5wrjt" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.754394 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v8wmm"] Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.756653 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.778723 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v8wmm"] Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.862977 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-55h47" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.876316 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-utilities\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.876378 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlk9\" (UniqueName: \"kubernetes.io/projected/3cf89354-d5eb-4100-9972-a1ee48f0f123-kube-api-access-njlk9\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.876407 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-catalog-content\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.977869 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-utilities\") pod \"community-operators-v8wmm\" 
(UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.978260 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njlk9\" (UniqueName: \"kubernetes.io/projected/3cf89354-d5eb-4100-9972-a1ee48f0f123-kube-api-access-njlk9\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.978291 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-catalog-content\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.978809 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-catalog-content\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:17 crc kubenswrapper[4992]: I1211 08:39:17.979077 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-utilities\") pod \"community-operators-v8wmm\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:18 crc kubenswrapper[4992]: I1211 08:39:18.003992 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njlk9\" (UniqueName: \"kubernetes.io/projected/3cf89354-d5eb-4100-9972-a1ee48f0f123-kube-api-access-njlk9\") pod \"community-operators-v8wmm\" (UID: 
\"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:18 crc kubenswrapper[4992]: I1211 08:39:18.082440 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:18 crc kubenswrapper[4992]: I1211 08:39:18.417000 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v8wmm"] Dec 11 08:39:18 crc kubenswrapper[4992]: I1211 08:39:18.716405 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerStarted","Data":"bf6d5b679cf799f859fa68450fedc8703a43e019cdb01db4cc9dd0e983cce30c"} Dec 11 08:39:19 crc kubenswrapper[4992]: I1211 08:39:19.725255 4992 generic.go:334] "Generic (PLEG): container finished" podID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerID="7a388fe431d05185c85a67ec290b4888439476ba495b8656797163fe1a0de247" exitCode=0 Dec 11 08:39:19 crc kubenswrapper[4992]: I1211 08:39:19.725334 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerDied","Data":"7a388fe431d05185c85a67ec290b4888439476ba495b8656797163fe1a0de247"} Dec 11 08:39:20 crc kubenswrapper[4992]: I1211 08:39:20.733909 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" event={"ID":"74f7d667-67f0-459b-a7a0-f46c0e095485","Type":"ContainerStarted","Data":"83b6f12ac5a3bf8fdbb910b962d0ff59b026b2df6c983ed951adf15bd1157bc9"} Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.134129 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fxrcn"] Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.137201 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.155846 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxrcn"] Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.157767 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frjvw\" (UniqueName: \"kubernetes.io/projected/6ac3833a-d349-476d-9759-28824233d07a-kube-api-access-frjvw\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.157911 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-catalog-content\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.158020 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-utilities\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.259297 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frjvw\" (UniqueName: \"kubernetes.io/projected/6ac3833a-d349-476d-9759-28824233d07a-kube-api-access-frjvw\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.259355 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-catalog-content\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.259379 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-utilities\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.259943 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-catalog-content\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.260030 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-utilities\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.287149 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frjvw\" (UniqueName: \"kubernetes.io/projected/6ac3833a-d349-476d-9759-28824233d07a-kube-api-access-frjvw\") pod \"certified-operators-fxrcn\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.465307 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:22 crc kubenswrapper[4992]: I1211 08:39:22.981132 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fxrcn"] Dec 11 08:39:23 crc kubenswrapper[4992]: I1211 08:39:23.762999 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerStarted","Data":"697f3f6c361e1f1c9f6f71a0894b7be5b5e41d6fdd1a35d756a6b1992a35ba22"} Dec 11 08:39:26 crc kubenswrapper[4992]: I1211 08:39:26.785924 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:39:26 crc kubenswrapper[4992]: I1211 08:39:26.787661 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" Dec 11 08:39:26 crc kubenswrapper[4992]: I1211 08:39:26.807922 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-l6gnl" podStartSLOduration=9.880433226 podStartE2EDuration="1m9.807899006s" podCreationTimestamp="2025-12-11 08:38:17 +0000 UTC" firstStartedPulling="2025-12-11 08:38:19.7018325 +0000 UTC m=+923.961306426" lastFinishedPulling="2025-12-11 08:39:19.62929827 +0000 UTC m=+983.888772206" observedRunningTime="2025-12-11 08:39:26.802085253 +0000 UTC m=+991.061559199" watchObservedRunningTime="2025-12-11 08:39:26.807899006 +0000 UTC m=+991.067372942" Dec 11 08:39:35 crc kubenswrapper[4992]: I1211 08:39:35.378206 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 11 08:39:35 crc kubenswrapper[4992]: I1211 08:39:35.378799 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:39:35 crc kubenswrapper[4992]: I1211 08:39:35.378847 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:39:35 crc kubenswrapper[4992]: I1211 08:39:35.379489 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"052d1b39952568f7c7dadc00d816b97c8f69c2e12d851ed0f8503ebf05896a23"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 08:39:35 crc kubenswrapper[4992]: I1211 08:39:35.379553 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://052d1b39952568f7c7dadc00d816b97c8f69c2e12d851ed0f8503ebf05896a23" gracePeriod=600 Dec 11 08:39:37 crc kubenswrapper[4992]: I1211 08:39:37.906889 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerStarted","Data":"bf082b285a3ed56b0ef085d1e12c55d1db74418ecf93b0fb92be3c4f7707992d"} Dec 11 08:39:37 crc kubenswrapper[4992]: I1211 08:39:37.909947 4992 generic.go:334] "Generic (PLEG): container finished" podID="6ac3833a-d349-476d-9759-28824233d07a" 
containerID="cde6e8e41e2f0b69441907cac064da1d5815927c8a5f05ba8ed96675d2b0c0d6" exitCode=0 Dec 11 08:39:37 crc kubenswrapper[4992]: I1211 08:39:37.910461 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerDied","Data":"cde6e8e41e2f0b69441907cac064da1d5815927c8a5f05ba8ed96675d2b0c0d6"} Dec 11 08:39:37 crc kubenswrapper[4992]: I1211 08:39:37.913395 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="052d1b39952568f7c7dadc00d816b97c8f69c2e12d851ed0f8503ebf05896a23" exitCode=0 Dec 11 08:39:37 crc kubenswrapper[4992]: I1211 08:39:37.913435 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"052d1b39952568f7c7dadc00d816b97c8f69c2e12d851ed0f8503ebf05896a23"} Dec 11 08:39:37 crc kubenswrapper[4992]: I1211 08:39:37.913465 4992 scope.go:117] "RemoveContainer" containerID="60689b85e9d0e4eef61ab75310d16d21a29edde0bcacd67f8fb3fabf7eaa5ca7" Dec 11 08:39:38 crc kubenswrapper[4992]: I1211 08:39:38.920443 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"d64a7d32f88a68b108a9286da7fc154fed7c669f9f13fdf26c97611e89c34eb5"} Dec 11 08:39:38 crc kubenswrapper[4992]: I1211 08:39:38.923107 4992 generic.go:334] "Generic (PLEG): container finished" podID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerID="bf082b285a3ed56b0ef085d1e12c55d1db74418ecf93b0fb92be3c4f7707992d" exitCode=0 Dec 11 08:39:38 crc kubenswrapper[4992]: I1211 08:39:38.923137 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" 
event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerDied","Data":"bf082b285a3ed56b0ef085d1e12c55d1db74418ecf93b0fb92be3c4f7707992d"} Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.154466 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j5prz"] Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.156532 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.174084 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5prz"] Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.320792 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-catalog-content\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.320860 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-utilities\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.320883 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6vv6\" (UniqueName: \"kubernetes.io/projected/d948d471-7cda-4f09-a493-f28c8fb7f439-kube-api-access-n6vv6\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.422294 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6vv6\" (UniqueName: \"kubernetes.io/projected/d948d471-7cda-4f09-a493-f28c8fb7f439-kube-api-access-n6vv6\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.423036 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-catalog-content\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.423162 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-utilities\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.423575 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-catalog-content\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.423595 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-utilities\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.456558 4992 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n6vv6\" (UniqueName: \"kubernetes.io/projected/d948d471-7cda-4f09-a493-f28c8fb7f439-kube-api-access-n6vv6\") pod \"redhat-marketplace-j5prz\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") " pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.487608 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.932885 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerStarted","Data":"c8962fcd937a33dab796040b7988f0292bf0f06a7465d55df63c94814dbafcc7"} Dec 11 08:39:39 crc kubenswrapper[4992]: I1211 08:39:39.970875 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5prz"] Dec 11 08:39:40 crc kubenswrapper[4992]: I1211 08:39:40.942316 4992 generic.go:334] "Generic (PLEG): container finished" podID="6ac3833a-d349-476d-9759-28824233d07a" containerID="c8962fcd937a33dab796040b7988f0292bf0f06a7465d55df63c94814dbafcc7" exitCode=0 Dec 11 08:39:40 crc kubenswrapper[4992]: I1211 08:39:40.942388 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerDied","Data":"c8962fcd937a33dab796040b7988f0292bf0f06a7465d55df63c94814dbafcc7"} Dec 11 08:39:40 crc kubenswrapper[4992]: I1211 08:39:40.945972 4992 generic.go:334] "Generic (PLEG): container finished" podID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerID="9d2339ffa603209a57a3aa82d10714bbf633ce82f4cc2c1f0ef2a4f37ed411ed" exitCode=0 Dec 11 08:39:40 crc kubenswrapper[4992]: I1211 08:39:40.946009 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5prz" 
event={"ID":"d948d471-7cda-4f09-a493-f28c8fb7f439","Type":"ContainerDied","Data":"9d2339ffa603209a57a3aa82d10714bbf633ce82f4cc2c1f0ef2a4f37ed411ed"} Dec 11 08:39:40 crc kubenswrapper[4992]: I1211 08:39:40.946032 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5prz" event={"ID":"d948d471-7cda-4f09-a493-f28c8fb7f439","Type":"ContainerStarted","Data":"cf1938675dfff08699351e9e021c9c2723d5881ecc5811fcf3b812e54c6927be"} Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.470778 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6clvs"] Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.471975 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.475307 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.475512 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.475667 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-csss8" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.478382 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.486671 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6clvs"] Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.550905 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8325fbe7-218b-4d83-9973-6967fa0a726c-config\") pod \"dnsmasq-dns-675f4bcbfc-6clvs\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.551122 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bgqh\" (UniqueName: \"kubernetes.io/projected/8325fbe7-218b-4d83-9973-6967fa0a726c-kube-api-access-5bgqh\") pod \"dnsmasq-dns-675f4bcbfc-6clvs\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.553195 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gzbq4"] Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.556302 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.559835 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.572195 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gzbq4"] Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.652278 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bgqh\" (UniqueName: \"kubernetes.io/projected/8325fbe7-218b-4d83-9973-6967fa0a726c-kube-api-access-5bgqh\") pod \"dnsmasq-dns-675f4bcbfc-6clvs\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.653003 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8325fbe7-218b-4d83-9973-6967fa0a726c-config\") pod \"dnsmasq-dns-675f4bcbfc-6clvs\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.653038 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.653076 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-config\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.653156 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz727\" (UniqueName: \"kubernetes.io/projected/9db5376c-59e4-4552-9e55-cc6f9df24e5f-kube-api-access-kz727\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.654118 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8325fbe7-218b-4d83-9973-6967fa0a726c-config\") pod \"dnsmasq-dns-675f4bcbfc-6clvs\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.681463 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bgqh\" (UniqueName: \"kubernetes.io/projected/8325fbe7-218b-4d83-9973-6967fa0a726c-kube-api-access-5bgqh\") pod \"dnsmasq-dns-675f4bcbfc-6clvs\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.753602 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-kz727\" (UniqueName: \"kubernetes.io/projected/9db5376c-59e4-4552-9e55-cc6f9df24e5f-kube-api-access-kz727\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.753693 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.753721 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-config\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.754526 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-config\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.754802 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.770040 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz727\" (UniqueName: 
\"kubernetes.io/projected/9db5376c-59e4-4552-9e55-cc6f9df24e5f-kube-api-access-kz727\") pod \"dnsmasq-dns-78dd6ddcc-gzbq4\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.796184 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:39:41 crc kubenswrapper[4992]: I1211 08:39:41.871462 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.042451 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6clvs"] Dec 11 08:39:42 crc kubenswrapper[4992]: W1211 08:39:42.054533 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8325fbe7_218b_4d83_9973_6967fa0a726c.slice/crio-7ce20d9b7e8259815b9863e5cdbb9da5f17c6c30eb20ef32bf30442a508f568f WatchSource:0}: Error finding container 7ce20d9b7e8259815b9863e5cdbb9da5f17c6c30eb20ef32bf30442a508f568f: Status 404 returned error can't find the container with id 7ce20d9b7e8259815b9863e5cdbb9da5f17c6c30eb20ef32bf30442a508f568f Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.352728 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gzbq4"] Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.969978 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" event={"ID":"9db5376c-59e4-4552-9e55-cc6f9df24e5f","Type":"ContainerStarted","Data":"7b741b883b63595e0b55da11bec58f51c52b2a6a6ee6c67fd5ed90621340582c"} Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.978892 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" 
event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerStarted","Data":"362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091"} Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.986013 4992 generic.go:334] "Generic (PLEG): container finished" podID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerID="9f25a2c428aeb958a256686ed2897765cd85a3e812a93b45dc423b275295517e" exitCode=0 Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.986144 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5prz" event={"ID":"d948d471-7cda-4f09-a493-f28c8fb7f439","Type":"ContainerDied","Data":"9f25a2c428aeb958a256686ed2897765cd85a3e812a93b45dc423b275295517e"} Dec 11 08:39:42 crc kubenswrapper[4992]: I1211 08:39:42.995130 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" event={"ID":"8325fbe7-218b-4d83-9973-6967fa0a726c","Type":"ContainerStarted","Data":"7ce20d9b7e8259815b9863e5cdbb9da5f17c6c30eb20ef32bf30442a508f568f"} Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:42.997271 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fxrcn" podStartSLOduration=17.263635532 podStartE2EDuration="20.99725486s" podCreationTimestamp="2025-12-11 08:39:22 +0000 UTC" firstStartedPulling="2025-12-11 08:39:37.911279364 +0000 UTC m=+1002.170753290" lastFinishedPulling="2025-12-11 08:39:41.644898692 +0000 UTC m=+1005.904372618" observedRunningTime="2025-12-11 08:39:42.995846356 +0000 UTC m=+1007.255320302" watchObservedRunningTime="2025-12-11 08:39:42.99725486 +0000 UTC m=+1007.256728786" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.352537 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6clvs"] Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.396539 4992 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-666b6646f7-qjhqv"] Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.397882 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.403397 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qjhqv"] Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.585965 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.586081 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxljl\" (UniqueName: \"kubernetes.io/projected/e3b993d2-9421-48c0-b4fc-b9eef93186f4-kube-api-access-qxljl\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.586115 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-config\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.687719 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-config\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc 
kubenswrapper[4992]: I1211 08:39:43.688036 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.688307 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxljl\" (UniqueName: \"kubernetes.io/projected/e3b993d2-9421-48c0-b4fc-b9eef93186f4-kube-api-access-qxljl\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.688518 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-config\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.689086 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.727617 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxljl\" (UniqueName: \"kubernetes.io/projected/e3b993d2-9421-48c0-b4fc-b9eef93186f4-kube-api-access-qxljl\") pod \"dnsmasq-dns-666b6646f7-qjhqv\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.740973 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.845182 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gzbq4"] Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.872674 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ckhzn"] Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.883446 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.902341 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ckhzn"] Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.994468 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-config\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.994561 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6sf\" (UniqueName: \"kubernetes.io/projected/2fc10a15-b026-4617-b650-5eb0f8af0299-kube-api-access-fv6sf\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:43 crc kubenswrapper[4992]: I1211 08:39:43.994602 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 
08:39:44.096334 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-config\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.096738 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6sf\" (UniqueName: \"kubernetes.io/projected/2fc10a15-b026-4617-b650-5eb0f8af0299-kube-api-access-fv6sf\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.096775 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.097609 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.109557 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-config\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.121160 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6sf\" 
(UniqueName: \"kubernetes.io/projected/2fc10a15-b026-4617-b650-5eb0f8af0299-kube-api-access-fv6sf\") pod \"dnsmasq-dns-57d769cc4f-ckhzn\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.206398 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.553116 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.555050 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.556773 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.561499 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.561719 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.561996 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.562059 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.562856 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.563145 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-twq5q" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.579897 4992 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qjhqv"] Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.602855 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709337 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709395 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709430 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709455 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709475 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/72f9411b-61f4-4615-8653-5f90b629690d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709496 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709544 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmbzl\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-kube-api-access-wmbzl\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709608 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-config-data\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709670 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709790 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.709880 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72f9411b-61f4-4615-8653-5f90b629690d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.786676 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ckhzn"] Dec 11 08:39:44 crc kubenswrapper[4992]: W1211 08:39:44.796188 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc10a15_b026_4617_b650_5eb0f8af0299.slice/crio-f1b9a3fa8e09110c70b955c831ca3ecd05150bc0baf0fa6a5c896caf662b75dd WatchSource:0}: Error finding container f1b9a3fa8e09110c70b955c831ca3ecd05150bc0baf0fa6a5c896caf662b75dd: Status 404 returned error can't find the container with id f1b9a3fa8e09110c70b955c831ca3ecd05150bc0baf0fa6a5c896caf662b75dd Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.811615 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-config-data\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.811740 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.811786 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.811844 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72f9411b-61f4-4615-8653-5f90b629690d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.811903 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.811941 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.812175 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.812219 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.812813 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.812903 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.812928 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72f9411b-61f4-4615-8653-5f90b629690d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.812951 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.813049 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmbzl\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-kube-api-access-wmbzl\") pod \"rabbitmq-server-0\" (UID: 
\"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.813677 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.813843 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.813894 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.814277 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-config-data\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.818935 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.819591 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.819903 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72f9411b-61f4-4615-8653-5f90b629690d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.831820 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmbzl\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-kube-api-access-wmbzl\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.831989 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72f9411b-61f4-4615-8653-5f90b629690d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.874298 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.897285 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.987338 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.990242 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.998848 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.999153 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.999297 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.999432 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.999571 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.999805 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-kg67r" Dec 11 08:39:44 crc kubenswrapper[4992]: I1211 08:39:44.999962 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.000678 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.024799 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" 
event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerStarted","Data":"941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648"} Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.047086 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5prz" event={"ID":"d948d471-7cda-4f09-a493-f28c8fb7f439","Type":"ContainerStarted","Data":"e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681"} Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.056492 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" event={"ID":"2fc10a15-b026-4617-b650-5eb0f8af0299","Type":"ContainerStarted","Data":"f1b9a3fa8e09110c70b955c831ca3ecd05150bc0baf0fa6a5c896caf662b75dd"} Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.066917 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" event={"ID":"e3b993d2-9421-48c0-b4fc-b9eef93186f4","Type":"ContainerStarted","Data":"c573f8476754e45c1ebbf0caa8fd5cf3e741daa7823f52f074019f1b48a52b21"} Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.094978 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v8wmm" podStartSLOduration=3.823272366 podStartE2EDuration="28.094953148s" podCreationTimestamp="2025-12-11 08:39:17 +0000 UTC" firstStartedPulling="2025-12-11 08:39:19.727149938 +0000 UTC m=+983.986623864" lastFinishedPulling="2025-12-11 08:39:43.99883072 +0000 UTC m=+1008.258304646" observedRunningTime="2025-12-11 08:39:45.071398891 +0000 UTC m=+1009.330872817" watchObservedRunningTime="2025-12-11 08:39:45.094953148 +0000 UTC m=+1009.354427074" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.099444 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j5prz" podStartSLOduration=2.968791691 podStartE2EDuration="6.099423038s" 
podCreationTimestamp="2025-12-11 08:39:39 +0000 UTC" firstStartedPulling="2025-12-11 08:39:40.947369154 +0000 UTC m=+1005.206843070" lastFinishedPulling="2025-12-11 08:39:44.078000481 +0000 UTC m=+1008.337474417" observedRunningTime="2025-12-11 08:39:45.093740488 +0000 UTC m=+1009.353214424" watchObservedRunningTime="2025-12-11 08:39:45.099423038 +0000 UTC m=+1009.358896964" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.119805 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6hf\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-kube-api-access-wt6hf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.119862 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.119895 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.119926 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 
08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.119954 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.120139 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.120253 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a0fa5ac-9268-4db9-8e40-42aca5111af9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.120341 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.120405 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 
08:39:45.120579 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a0fa5ac-9268-4db9-8e40-42aca5111af9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.120703 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222240 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6hf\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-kube-api-access-wt6hf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222281 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222327 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222358 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222392 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222440 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.222931 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a0fa5ac-9268-4db9-8e40-42aca5111af9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.223001 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.223054 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.223285 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a0fa5ac-9268-4db9-8e40-42aca5111af9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.223324 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.224676 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.225080 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.225426 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.225838 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.227942 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.228097 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.244424 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a0fa5ac-9268-4db9-8e40-42aca5111af9-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.244708 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 
08:39:45.247267 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6hf\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-kube-api-access-wt6hf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.247276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a0fa5ac-9268-4db9-8e40-42aca5111af9-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.252603 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.257368 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.327266 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.600116 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:39:45 crc kubenswrapper[4992]: I1211 08:39:45.833421 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:39:45 crc kubenswrapper[4992]: W1211 08:39:45.894238 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a0fa5ac_9268_4db9_8e40_42aca5111af9.slice/crio-3a924922ce8a768d65894f79c30b24e8a3825e36ede065cb63188abf2cbc1543 WatchSource:0}: Error finding container 3a924922ce8a768d65894f79c30b24e8a3825e36ede065cb63188abf2cbc1543: Status 404 returned error can't find the container with id 3a924922ce8a768d65894f79c30b24e8a3825e36ede065cb63188abf2cbc1543 Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.089323 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72f9411b-61f4-4615-8653-5f90b629690d","Type":"ContainerStarted","Data":"e06ef21aa6cecd1bbd3da957f5db0737eaf9090187ec504a1c9ed94acdac9096"} Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.092278 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a0fa5ac-9268-4db9-8e40-42aca5111af9","Type":"ContainerStarted","Data":"3a924922ce8a768d65894f79c30b24e8a3825e36ede065cb63188abf2cbc1543"} Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.324071 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.325446 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.330540 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.331896 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.331986 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.331994 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4qvxg" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.339217 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.353981 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.455445 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.455579 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-config-data-default\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.455610 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.455787 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.455861 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltmhl\" (UniqueName: \"kubernetes.io/projected/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-kube-api-access-ltmhl\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.456003 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-kolla-config\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.456134 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.456203 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.557754 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-kolla-config\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558047 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558092 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558123 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558181 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558204 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558265 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558300 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltmhl\" (UniqueName: \"kubernetes.io/projected/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-kube-api-access-ltmhl\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558364 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.558984 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-kolla-config\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 
08:39:46.560296 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.566882 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-config-data-default\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.571842 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.577911 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.581325 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltmhl\" (UniqueName: \"kubernetes.io/projected/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-kube-api-access-ltmhl\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.599443 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.639411 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67\") " pod="openstack/openstack-galera-0" Dec 11 08:39:46 crc kubenswrapper[4992]: I1211 08:39:46.664876 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.210874 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 08:39:47 crc kubenswrapper[4992]: W1211 08:39:47.230925 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9cb8c6e_1bff_4d44_b4ee_f91f285f4f67.slice/crio-2cfafa75ce7420a34c577e46fbb5a1f9aaba8103af8b4e84276a41bba538c586 WatchSource:0}: Error finding container 2cfafa75ce7420a34c577e46fbb5a1f9aaba8103af8b4e84276a41bba538c586: Status 404 returned error can't find the container with id 2cfafa75ce7420a34c577e46fbb5a1f9aaba8103af8b4e84276a41bba538c586 Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.757823 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.760264 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.764394 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.764694 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.764773 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.765234 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gps57" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.771548 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.884647 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.884752 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.884825 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.884864 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.884955 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsfb8\" (UniqueName: \"kubernetes.io/projected/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-kube-api-access-fsfb8\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.885464 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.885538 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.885604 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987363 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987418 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987449 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987496 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987514 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987531 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987555 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.987592 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsfb8\" (UniqueName: \"kubernetes.io/projected/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-kube-api-access-fsfb8\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.993199 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.996068 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.996564 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.996887 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:47 crc kubenswrapper[4992]: I1211 08:39:47.997996 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.000333 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.019836 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " 
pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.020627 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsfb8\" (UniqueName: \"kubernetes.io/projected/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-kube-api-access-fsfb8\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.023479 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5eb79c-8f1c-4416-ab38-00b67e0b3f86-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86\") " pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.083036 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.083114 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.093575 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.173846 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67","Type":"ContainerStarted","Data":"2cfafa75ce7420a34c577e46fbb5a1f9aaba8103af8b4e84276a41bba538c586"} Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.173897 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.175152 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.175184 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.175579 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.182470 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-658tv" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.182683 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.182849 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.293359 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e5a806-cf0a-4149-81d7-803170a48b0e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.293879 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e5a806-cf0a-4149-81d7-803170a48b0e-config-data\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.293939 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv5pk\" (UniqueName: \"kubernetes.io/projected/a9e5a806-cf0a-4149-81d7-803170a48b0e-kube-api-access-jv5pk\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.293961 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e5a806-cf0a-4149-81d7-803170a48b0e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.294025 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a9e5a806-cf0a-4149-81d7-803170a48b0e-kolla-config\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.398870 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv5pk\" (UniqueName: \"kubernetes.io/projected/a9e5a806-cf0a-4149-81d7-803170a48b0e-kube-api-access-jv5pk\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.398965 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9e5a806-cf0a-4149-81d7-803170a48b0e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.399077 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a9e5a806-cf0a-4149-81d7-803170a48b0e-kolla-config\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.399135 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e5a806-cf0a-4149-81d7-803170a48b0e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.399973 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e5a806-cf0a-4149-81d7-803170a48b0e-config-data\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.401149 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e5a806-cf0a-4149-81d7-803170a48b0e-config-data\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.403162 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a9e5a806-cf0a-4149-81d7-803170a48b0e-kolla-config\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.410305 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e5a806-cf0a-4149-81d7-803170a48b0e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.423879 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9e5a806-cf0a-4149-81d7-803170a48b0e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.441776 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv5pk\" (UniqueName: \"kubernetes.io/projected/a9e5a806-cf0a-4149-81d7-803170a48b0e-kube-api-access-jv5pk\") pod \"memcached-0\" (UID: \"a9e5a806-cf0a-4149-81d7-803170a48b0e\") " pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.583225 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 11 08:39:48 crc kubenswrapper[4992]: I1211 08:39:48.729870 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 08:39:49 crc kubenswrapper[4992]: I1211 08:39:49.488802 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:49 crc kubenswrapper[4992]: I1211 08:39:49.489231 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:49 crc kubenswrapper[4992]: I1211 08:39:49.536148 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:49 crc kubenswrapper[4992]: I1211 08:39:49.875828 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-g9552" podUID="d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:39:50 crc kubenswrapper[4992]: I1211 08:39:50.240854 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.182665 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.183612 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.185897 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hx8gm" Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.196914 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.249074 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szfwz\" (UniqueName: \"kubernetes.io/projected/f99cf716-c024-485a-8d47-20218de1cb10-kube-api-access-szfwz\") pod \"kube-state-metrics-0\" (UID: \"f99cf716-c024-485a-8d47-20218de1cb10\") " pod="openstack/kube-state-metrics-0" Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.348371 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5prz"] Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.350826 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szfwz\" (UniqueName: \"kubernetes.io/projected/f99cf716-c024-485a-8d47-20218de1cb10-kube-api-access-szfwz\") pod \"kube-state-metrics-0\" (UID: \"f99cf716-c024-485a-8d47-20218de1cb10\") " pod="openstack/kube-state-metrics-0" Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.371552 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szfwz\" (UniqueName: \"kubernetes.io/projected/f99cf716-c024-485a-8d47-20218de1cb10-kube-api-access-szfwz\") pod \"kube-state-metrics-0\" (UID: \"f99cf716-c024-485a-8d47-20218de1cb10\") " pod="openstack/kube-state-metrics-0" Dec 11 08:39:51 crc kubenswrapper[4992]: I1211 08:39:51.528061 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.219985 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j5prz" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="registry-server" containerID="cri-o://e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681" gracePeriod=2 Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.466814 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.466875 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.527088 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.904953 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.906987 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.911271 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.912066 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.912142 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.912273 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-php2n" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.912366 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.930926 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.986789 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.986855 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.986876 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c51bf698-2728-4a49-b7e1-d80c304725e2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.987045 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c51bf698-2728-4a49-b7e1-d80c304725e2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.987333 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.987370 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2l99\" (UniqueName: \"kubernetes.io/projected/c51bf698-2728-4a49-b7e1-d80c304725e2-kube-api-access-c2l99\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.987418 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:52 crc kubenswrapper[4992]: I1211 08:39:52.987438 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c51bf698-2728-4a49-b7e1-d80c304725e2-config\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.090368 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c51bf698-2728-4a49-b7e1-d80c304725e2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.090542 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.090570 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2l99\" (UniqueName: \"kubernetes.io/projected/c51bf698-2728-4a49-b7e1-d80c304725e2-kube-api-access-c2l99\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.090673 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.090753 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c51bf698-2728-4a49-b7e1-d80c304725e2-config\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0" Dec 11 08:39:53 
crc kubenswrapper[4992]: I1211 08:39:53.090955 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.091058 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c51bf698-2728-4a49-b7e1-d80c304725e2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.091173 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.093432 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.094016 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c51bf698-2728-4a49-b7e1-d80c304725e2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.094984 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c51bf698-2728-4a49-b7e1-d80c304725e2-config\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.095824 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c51bf698-2728-4a49-b7e1-d80c304725e2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.106167 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.106197 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.106828 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c51bf698-2728-4a49-b7e1-d80c304725e2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.118121 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2l99\" (UniqueName: \"kubernetes.io/projected/c51bf698-2728-4a49-b7e1-d80c304725e2-kube-api-access-c2l99\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.123282 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c51bf698-2728-4a49-b7e1-d80c304725e2\") " pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.224334 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.230720 4992 generic.go:334] "Generic (PLEG): container finished" podID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerID="e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681" exitCode=0
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.231523 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5prz" event={"ID":"d948d471-7cda-4f09-a493-f28c8fb7f439","Type":"ContainerDied","Data":"e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681"}
Dec 11 08:39:53 crc kubenswrapper[4992]: I1211 08:39:53.276548 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fxrcn"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.144265 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-djp6h"]
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.145399 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.148561 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.149125 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.153928 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-8kf88"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.159722 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-sw28r"]
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.162445 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.169974 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djp6h"]
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.175994 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sw28r"]
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212298 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b8eb34-f000-49af-bcf9-7507f85afd2b-ovn-controller-tls-certs\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212358 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9psr\" (UniqueName: \"kubernetes.io/projected/43b8eb34-f000-49af-bcf9-7507f85afd2b-kube-api-access-z9psr\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212398 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-run\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212423 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjzsj\" (UniqueName: \"kubernetes.io/projected/9698b65a-4246-466e-aac8-e7fe29c4063d-kube-api-access-bjzsj\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9698b65a-4246-466e-aac8-e7fe29c4063d-scripts\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212490 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43b8eb34-f000-49af-bcf9-7507f85afd2b-scripts\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212511 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-run-ovn\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212545 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-log-ovn\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212590 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-log\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212627 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-etc-ovs\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212686 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-run\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212719 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-lib\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.212742 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b8eb34-f000-49af-bcf9-7507f85afd2b-combined-ca-bundle\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314655 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-log\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314714 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-etc-ovs\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314754 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-run\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314780 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-lib\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314800 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b8eb34-f000-49af-bcf9-7507f85afd2b-combined-ca-bundle\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314887 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b8eb34-f000-49af-bcf9-7507f85afd2b-ovn-controller-tls-certs\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314915 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9psr\" (UniqueName: \"kubernetes.io/projected/43b8eb34-f000-49af-bcf9-7507f85afd2b-kube-api-access-z9psr\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314942 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjzsj\" (UniqueName: \"kubernetes.io/projected/9698b65a-4246-466e-aac8-e7fe29c4063d-kube-api-access-bjzsj\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314963 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-run\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.314994 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9698b65a-4246-466e-aac8-e7fe29c4063d-scripts\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315019 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43b8eb34-f000-49af-bcf9-7507f85afd2b-scripts\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315037 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-run-ovn\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315067 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-log-ovn\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315203 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-log\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315257 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-etc-ovs\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315343 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-run\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.315461 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-lib\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.316123 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9698b65a-4246-466e-aac8-e7fe29c4063d-var-run\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.316263 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-log-ovn\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.316410 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/43b8eb34-f000-49af-bcf9-7507f85afd2b-var-run-ovn\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.318365 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9698b65a-4246-466e-aac8-e7fe29c4063d-scripts\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.319026 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43b8eb34-f000-49af-bcf9-7507f85afd2b-combined-ca-bundle\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.319788 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/43b8eb34-f000-49af-bcf9-7507f85afd2b-scripts\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.327445 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/43b8eb34-f000-49af-bcf9-7507f85afd2b-ovn-controller-tls-certs\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.335006 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9psr\" (UniqueName: \"kubernetes.io/projected/43b8eb34-f000-49af-bcf9-7507f85afd2b-kube-api-access-z9psr\") pod \"ovn-controller-djp6h\" (UID: \"43b8eb34-f000-49af-bcf9-7507f85afd2b\") " pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.335328 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjzsj\" (UniqueName: \"kubernetes.io/projected/9698b65a-4246-466e-aac8-e7fe29c4063d-kube-api-access-bjzsj\") pod \"ovn-controller-ovs-sw28r\" (UID: \"9698b65a-4246-466e-aac8-e7fe29c4063d\") " pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.465132 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djp6h"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.483435 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:39:54 crc kubenswrapper[4992]: I1211 08:39:54.944161 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxrcn"]
Dec 11 08:39:55 crc kubenswrapper[4992]: I1211 08:39:55.243800 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fxrcn" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="registry-server" containerID="cri-o://362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091" gracePeriod=2
Dec 11 08:39:56 crc kubenswrapper[4992]: W1211 08:39:56.920539 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c5eb79c_8f1c_4416_ab38_00b67e0b3f86.slice/crio-52954a3a404117ff9e427f9896a5a9795ce72ec6c41822962a95dd852ab1d055 WatchSource:0}: Error finding container 52954a3a404117ff9e427f9896a5a9795ce72ec6c41822962a95dd852ab1d055: Status 404 returned error can't find the container with id 52954a3a404117ff9e427f9896a5a9795ce72ec6c41822962a95dd852ab1d055
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.262602 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86","Type":"ContainerStarted","Data":"52954a3a404117ff9e427f9896a5a9795ce72ec6c41822962a95dd852ab1d055"}
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.266981 4992 generic.go:334] "Generic (PLEG): container finished" podID="6ac3833a-d349-476d-9759-28824233d07a" containerID="362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091" exitCode=0
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.267022 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerDied","Data":"362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091"}
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.456594 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.459764 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.466042 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.466121 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sq8dj"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.466434 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.466518 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.473539 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.581994 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a84aae8d-da28-42b4-80a4-99e157fb57ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582048 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582090 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582123 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czm9\" (UniqueName: \"kubernetes.io/projected/a84aae8d-da28-42b4-80a4-99e157fb57ec-kube-api-access-4czm9\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582180 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84aae8d-da28-42b4-80a4-99e157fb57ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582318 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a84aae8d-da28-42b4-80a4-99e157fb57ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582430 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.582465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683656 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a84aae8d-da28-42b4-80a4-99e157fb57ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683748 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683776 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683826 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a84aae8d-da28-42b4-80a4-99e157fb57ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683869 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683893 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683917 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4czm9\" (UniqueName: \"kubernetes.io/projected/a84aae8d-da28-42b4-80a4-99e157fb57ec-kube-api-access-4czm9\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.683959 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84aae8d-da28-42b4-80a4-99e157fb57ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.685032 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a84aae8d-da28-42b4-80a4-99e157fb57ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.685294 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a84aae8d-da28-42b4-80a4-99e157fb57ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.686599 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.689913 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a84aae8d-da28-42b4-80a4-99e157fb57ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.695292 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.695690 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.695994 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a84aae8d-da28-42b4-80a4-99e157fb57ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.707303 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4czm9\" (UniqueName: \"kubernetes.io/projected/a84aae8d-da28-42b4-80a4-99e157fb57ec-kube-api-access-4czm9\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.711381 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a84aae8d-da28-42b4-80a4-99e157fb57ec\") " pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:57 crc kubenswrapper[4992]: I1211 08:39:57.780794 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 11 08:39:58 crc kubenswrapper[4992]: I1211 08:39:58.161861 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v8wmm"
Dec 11 08:39:58 crc kubenswrapper[4992]: I1211 08:39:58.552061 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v8wmm"]
Dec 11 08:39:58 crc kubenswrapper[4992]: I1211 08:39:58.552335 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v8wmm" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" containerID="cri-o://941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" gracePeriod=2
Dec 11 08:39:59 crc kubenswrapper[4992]: E1211 08:39:59.490164 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681 is running failed: container process not found" containerID="e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 08:39:59 crc kubenswrapper[4992]: E1211 08:39:59.490475 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681 is running failed: container process not found" containerID="e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 08:39:59 crc kubenswrapper[4992]: E1211 08:39:59.490757 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681 is running failed: container process not found" containerID="e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 08:39:59 crc kubenswrapper[4992]: E1211 08:39:59.490784 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-j5prz" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="registry-server"
Dec 11 08:40:02 crc kubenswrapper[4992]: E1211 08:40:02.467401 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091 is running failed: container process not found" containerID="362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 08:40:02 crc kubenswrapper[4992]: E1211 08:40:02.468413 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091 is running failed: container process not found" containerID="362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 08:40:02 crc kubenswrapper[4992]: E1211 08:40:02.468749 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091 is running failed: container process not found" containerID="362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 08:40:02 crc kubenswrapper[4992]: E1211 08:40:02.468780 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-fxrcn" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="registry-server"
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.328474 4992 generic.go:334] "Generic (PLEG): container finished" podID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" exitCode=0
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.328522 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerDied","Data":"941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648"}
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.585076 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5prz"
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.710977 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-catalog-content\") pod \"d948d471-7cda-4f09-a493-f28c8fb7f439\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") "
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.711125 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-utilities\") pod \"d948d471-7cda-4f09-a493-f28c8fb7f439\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") "
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.711194 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6vv6\" (UniqueName: \"kubernetes.io/projected/d948d471-7cda-4f09-a493-f28c8fb7f439-kube-api-access-n6vv6\") pod \"d948d471-7cda-4f09-a493-f28c8fb7f439\" (UID: \"d948d471-7cda-4f09-a493-f28c8fb7f439\") "
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.712456 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-utilities" (OuterVolumeSpecName: "utilities") pod "d948d471-7cda-4f09-a493-f28c8fb7f439" (UID: "d948d471-7cda-4f09-a493-f28c8fb7f439"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.716812 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d948d471-7cda-4f09-a493-f28c8fb7f439-kube-api-access-n6vv6" (OuterVolumeSpecName: "kube-api-access-n6vv6") pod "d948d471-7cda-4f09-a493-f28c8fb7f439" (UID: "d948d471-7cda-4f09-a493-f28c8fb7f439"). InnerVolumeSpecName "kube-api-access-n6vv6".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.732074 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d948d471-7cda-4f09-a493-f28c8fb7f439" (UID: "d948d471-7cda-4f09-a493-f28c8fb7f439"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.813016 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6vv6\" (UniqueName: \"kubernetes.io/projected/d948d471-7cda-4f09-a493-f28c8fb7f439-kube-api-access-n6vv6\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.813061 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:04 crc kubenswrapper[4992]: I1211 08:40:04.813070 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d948d471-7cda-4f09-a493-f28c8fb7f439-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:05 crc kubenswrapper[4992]: I1211 08:40:05.340865 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j5prz" event={"ID":"d948d471-7cda-4f09-a493-f28c8fb7f439","Type":"ContainerDied","Data":"cf1938675dfff08699351e9e021c9c2723d5881ecc5811fcf3b812e54c6927be"} Dec 11 08:40:05 crc kubenswrapper[4992]: I1211 08:40:05.340943 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j5prz" Dec 11 08:40:05 crc kubenswrapper[4992]: I1211 08:40:05.340974 4992 scope.go:117] "RemoveContainer" containerID="e394f5300137ed41b80cc3a71d4bb659fcd4cfc25c480a4e8bb85b96a8b3f681" Dec 11 08:40:05 crc kubenswrapper[4992]: I1211 08:40:05.372957 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5prz"] Dec 11 08:40:05 crc kubenswrapper[4992]: I1211 08:40:05.379854 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j5prz"] Dec 11 08:40:06 crc kubenswrapper[4992]: I1211 08:40:06.123783 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" path="/var/lib/kubelet/pods/d948d471-7cda-4f09-a493-f28c8fb7f439/volumes" Dec 11 08:40:08 crc kubenswrapper[4992]: E1211 08:40:08.083808 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:08 crc kubenswrapper[4992]: E1211 08:40:08.084223 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:08 crc kubenswrapper[4992]: E1211 08:40:08.084921 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:08 crc kubenswrapper[4992]: E1211 08:40:08.084956 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-v8wmm" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.487834 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.627765 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-utilities\") pod \"6ac3833a-d349-476d-9759-28824233d07a\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.627887 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frjvw\" (UniqueName: \"kubernetes.io/projected/6ac3833a-d349-476d-9759-28824233d07a-kube-api-access-frjvw\") pod \"6ac3833a-d349-476d-9759-28824233d07a\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.627962 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-catalog-content\") pod \"6ac3833a-d349-476d-9759-28824233d07a\" (UID: \"6ac3833a-d349-476d-9759-28824233d07a\") " Dec 11 08:40:10 crc 
kubenswrapper[4992]: I1211 08:40:10.628839 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-utilities" (OuterVolumeSpecName: "utilities") pod "6ac3833a-d349-476d-9759-28824233d07a" (UID: "6ac3833a-d349-476d-9759-28824233d07a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.637897 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac3833a-d349-476d-9759-28824233d07a-kube-api-access-frjvw" (OuterVolumeSpecName: "kube-api-access-frjvw") pod "6ac3833a-d349-476d-9759-28824233d07a" (UID: "6ac3833a-d349-476d-9759-28824233d07a"). InnerVolumeSpecName "kube-api-access-frjvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.709624 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ac3833a-d349-476d-9759-28824233d07a" (UID: "6ac3833a-d349-476d-9759-28824233d07a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.729385 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.729412 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frjvw\" (UniqueName: \"kubernetes.io/projected/6ac3833a-d349-476d-9759-28824233d07a-kube-api-access-frjvw\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:10 crc kubenswrapper[4992]: I1211 08:40:10.729421 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac3833a-d349-476d-9759-28824233d07a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:11 crc kubenswrapper[4992]: I1211 08:40:11.392993 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fxrcn" event={"ID":"6ac3833a-d349-476d-9759-28824233d07a","Type":"ContainerDied","Data":"697f3f6c361e1f1c9f6f71a0894b7be5b5e41d6fdd1a35d756a6b1992a35ba22"} Dec 11 08:40:11 crc kubenswrapper[4992]: I1211 08:40:11.393039 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fxrcn" Dec 11 08:40:11 crc kubenswrapper[4992]: I1211 08:40:11.461131 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fxrcn"] Dec 11 08:40:11 crc kubenswrapper[4992]: I1211 08:40:11.470713 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fxrcn"] Dec 11 08:40:12 crc kubenswrapper[4992]: I1211 08:40:12.107191 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac3833a-d349-476d-9759-28824233d07a" path="/var/lib/kubelet/pods/6ac3833a-d349-476d-9759-28824233d07a/volumes" Dec 11 08:40:13 crc kubenswrapper[4992]: I1211 08:40:13.883565 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djp6h"] Dec 11 08:40:17 crc kubenswrapper[4992]: E1211 08:40:17.678225 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 11 08:40:17 crc kubenswrapper[4992]: E1211 08:40:17.678860 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmbzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(72f9411b-61f4-4615-8653-5f90b629690d): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Dec 11 08:40:17 crc kubenswrapper[4992]: E1211 08:40:17.680227 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="72f9411b-61f4-4615-8653-5f90b629690d" Dec 11 08:40:18 crc kubenswrapper[4992]: E1211 08:40:18.083347 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:18 crc kubenswrapper[4992]: E1211 08:40:18.084050 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:18 crc kubenswrapper[4992]: E1211 08:40:18.084726 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:18 crc kubenswrapper[4992]: E1211 08:40:18.084770 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-v8wmm" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" Dec 11 08:40:20 crc kubenswrapper[4992]: E1211 08:40:20.912696 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="72f9411b-61f4-4615-8653-5f90b629690d" Dec 11 08:40:21 crc kubenswrapper[4992]: I1211 08:40:21.343585 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 08:40:24 crc kubenswrapper[4992]: E1211 08:40:24.599580 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 11 08:40:24 crc kubenswrapper[4992]: E1211 08:40:24.600068 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsfb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(0c5eb79c-8f1c-4416-ab38-00b67e0b3f86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:24 crc kubenswrapper[4992]: E1211 08:40:24.601307 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="0c5eb79c-8f1c-4416-ab38-00b67e0b3f86" Dec 11 08:40:24 crc kubenswrapper[4992]: E1211 08:40:24.607138 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 11 08:40:24 crc kubenswrapper[4992]: E1211 08:40:24.607299 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltmhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:24 crc kubenswrapper[4992]: E1211 08:40:24.608555 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67" Dec 11 08:40:25 crc kubenswrapper[4992]: E1211 08:40:25.502910 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67" Dec 11 08:40:25 crc kubenswrapper[4992]: E1211 08:40:25.503043 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="0c5eb79c-8f1c-4416-ab38-00b67e0b3f86" Dec 11 08:40:28 crc kubenswrapper[4992]: E1211 08:40:28.083584 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:28 crc kubenswrapper[4992]: E1211 08:40:28.084302 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container 
process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:28 crc kubenswrapper[4992]: E1211 08:40:28.084866 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 08:40:28 crc kubenswrapper[4992]: E1211 08:40:28.084894 4992 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-v8wmm" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" Dec 11 08:40:29 crc kubenswrapper[4992]: I1211 08:40:29.871858 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-splw9" podUID="34a79bdc-5774-468d-9136-9e03be822975" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:40:36 crc kubenswrapper[4992]: W1211 08:40:36.500106 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9e5a806_cf0a_4149_81d7_803170a48b0e.slice/crio-407404c57b153a0f40f7dbaf7f54ea483290a0b20fdc6804717ceec8f56b7f51 WatchSource:0}: Error finding container 407404c57b153a0f40f7dbaf7f54ea483290a0b20fdc6804717ceec8f56b7f51: Status 404 returned error can't find the container with id 407404c57b153a0f40f7dbaf7f54ea483290a0b20fdc6804717ceec8f56b7f51 Dec 11 08:40:36 crc kubenswrapper[4992]: 
I1211 08:40:36.510238 4992 scope.go:117] "RemoveContainer" containerID="9f25a2c428aeb958a256686ed2897765cd85a3e812a93b45dc423b275295517e" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.561158 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.603768 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-catalog-content\") pod \"3cf89354-d5eb-4100-9972-a1ee48f0f123\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.604019 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njlk9\" (UniqueName: \"kubernetes.io/projected/3cf89354-d5eb-4100-9972-a1ee48f0f123-kube-api-access-njlk9\") pod \"3cf89354-d5eb-4100-9972-a1ee48f0f123\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.604064 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-utilities\") pod \"3cf89354-d5eb-4100-9972-a1ee48f0f123\" (UID: \"3cf89354-d5eb-4100-9972-a1ee48f0f123\") " Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.608756 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-utilities" (OuterVolumeSpecName: "utilities") pod "3cf89354-d5eb-4100-9972-a1ee48f0f123" (UID: "3cf89354-d5eb-4100-9972-a1ee48f0f123"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.609610 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.627095 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf89354-d5eb-4100-9972-a1ee48f0f123-kube-api-access-njlk9" (OuterVolumeSpecName: "kube-api-access-njlk9") pod "3cf89354-d5eb-4100-9972-a1ee48f0f123" (UID: "3cf89354-d5eb-4100-9972-a1ee48f0f123"). InnerVolumeSpecName "kube-api-access-njlk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.633872 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v8wmm" event={"ID":"3cf89354-d5eb-4100-9972-a1ee48f0f123","Type":"ContainerDied","Data":"bf6d5b679cf799f859fa68450fedc8703a43e019cdb01db4cc9dd0e983cce30c"} Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.633828 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v8wmm" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.638359 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a9e5a806-cf0a-4149-81d7-803170a48b0e","Type":"ContainerStarted","Data":"407404c57b153a0f40f7dbaf7f54ea483290a0b20fdc6804717ceec8f56b7f51"} Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.641671 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djp6h" event={"ID":"43b8eb34-f000-49af-bcf9-7507f85afd2b","Type":"ContainerStarted","Data":"0f8691a39b22828412f46a5cf8beb4ca842b8a702875c9a7c16cbc48d1f14fcc"} Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.667755 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cf89354-d5eb-4100-9972-a1ee48f0f123" (UID: "3cf89354-d5eb-4100-9972-a1ee48f0f123"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.710681 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf89354-d5eb-4100-9972-a1ee48f0f123-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.710836 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njlk9\" (UniqueName: \"kubernetes.io/projected/3cf89354-d5eb-4100-9972-a1ee48f0f123-kube-api-access-njlk9\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.935238 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.970922 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v8wmm"] Dec 11 08:40:36 crc kubenswrapper[4992]: I1211 08:40:36.976605 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v8wmm"] Dec 11 08:40:37 crc kubenswrapper[4992]: I1211 08:40:37.710941 4992 scope.go:117] "RemoveContainer" containerID="9d2339ffa603209a57a3aa82d10714bbf633ce82f4cc2c1f0ef2a4f37ed411ed" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.724786 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.725209 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 
--log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxljl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-qjhqv_openstack(e3b993d2-9421-48c0-b4fc-b9eef93186f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:37 crc 
kubenswrapper[4992]: E1211 08:40:37.734791 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.734955 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,
SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-ckhzn_openstack(2fc10a15-b026-4617-b650-5eb0f8af0299): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.735957 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.736034 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bgqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-6clvs_openstack(8325fbe7-218b-4d83-9973-6967fa0a726c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.736115 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" podUID="2fc10a15-b026-4617-b650-5eb0f8af0299" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.736392 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" podUID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.737208 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" podUID="8325fbe7-218b-4d83-9973-6967fa0a726c" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.764755 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.764900 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz727,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-gzbq4_openstack(9db5376c-59e4-4552-9e55-cc6f9df24e5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:37 crc kubenswrapper[4992]: E1211 08:40:37.766058 4992 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" podUID="9db5376c-59e4-4552-9e55-cc6f9df24e5f" Dec 11 08:40:37 crc kubenswrapper[4992]: I1211 08:40:37.889834 4992 scope.go:117] "RemoveContainer" containerID="362227ff6cf47bb1f7a246a042d19980377baab87d544f8047e9b0ae30674091" Dec 11 08:40:37 crc kubenswrapper[4992]: I1211 08:40:37.942422 4992 scope.go:117] "RemoveContainer" containerID="c8962fcd937a33dab796040b7988f0292bf0f06a7465d55df63c94814dbafcc7" Dec 11 08:40:37 crc kubenswrapper[4992]: I1211 08:40:37.968613 4992 scope.go:117] "RemoveContainer" containerID="cde6e8e41e2f0b69441907cac064da1d5815927c8a5f05ba8ed96675d2b0c0d6" Dec 11 08:40:37 crc kubenswrapper[4992]: I1211 08:40:37.984870 4992 scope.go:117] "RemoveContainer" containerID="941a65f3e0028c707cebff2f4d394edd4da3cf90e09c3ee5fb864f2878ab6648" Dec 11 08:40:37 crc kubenswrapper[4992]: I1211 08:40:37.998207 4992 scope.go:117] "RemoveContainer" containerID="bf082b285a3ed56b0ef085d1e12c55d1db74418ecf93b0fb92be3c4f7707992d" Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.025481 4992 scope.go:117] "RemoveContainer" containerID="7a388fe431d05185c85a67ec290b4888439476ba495b8656797163fe1a0de247" Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.111473 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" path="/var/lib/kubelet/pods/3cf89354-d5eb-4100-9972-a1ee48f0f123/volumes" Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.288602 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 08:40:38 crc kubenswrapper[4992]: W1211 08:40:38.296290 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc51bf698_2728_4a49_b7e1_d80c304725e2.slice/crio-f0c002b7a662f371e3d1d998bc9729764e7ce3d5ae4cd03abc37feba08b1e4a1 WatchSource:0}: Error finding container f0c002b7a662f371e3d1d998bc9729764e7ce3d5ae4cd03abc37feba08b1e4a1: Status 404 returned error can't find the container with id f0c002b7a662f371e3d1d998bc9729764e7ce3d5ae4cd03abc37feba08b1e4a1 Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.365914 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 08:40:38 crc kubenswrapper[4992]: W1211 08:40:38.465792 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda84aae8d_da28_42b4_80a4_99e157fb57ec.slice/crio-d5e3b3ef833d90de4d0115e914eb4c0f7652fa98230209e3780f0249679872ff WatchSource:0}: Error finding container d5e3b3ef833d90de4d0115e914eb4c0f7652fa98230209e3780f0249679872ff: Status 404 returned error can't find the container with id d5e3b3ef833d90de4d0115e914eb4c0f7652fa98230209e3780f0249679872ff Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.658486 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a84aae8d-da28-42b4-80a4-99e157fb57ec","Type":"ContainerStarted","Data":"d5e3b3ef833d90de4d0115e914eb4c0f7652fa98230209e3780f0249679872ff"} Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.660252 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f99cf716-c024-485a-8d47-20218de1cb10","Type":"ContainerStarted","Data":"9ef4f23e1481705b5ec964be07baae3c2ebaa1b6bb97a059dad3b9a78e52b9df"} Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.670153 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"c51bf698-2728-4a49-b7e1-d80c304725e2","Type":"ContainerStarted","Data":"f0c002b7a662f371e3d1d998bc9729764e7ce3d5ae4cd03abc37feba08b1e4a1"} Dec 11 08:40:38 crc kubenswrapper[4992]: E1211 08:40:38.673559 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" podUID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" Dec 11 08:40:38 crc kubenswrapper[4992]: E1211 08:40:38.673974 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" podUID="2fc10a15-b026-4617-b650-5eb0f8af0299" Dec 11 08:40:38 crc kubenswrapper[4992]: I1211 08:40:38.980727 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-sw28r"] Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.152175 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.164800 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.277834 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8325fbe7-218b-4d83-9973-6967fa0a726c-config\") pod \"8325fbe7-218b-4d83-9973-6967fa0a726c\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.277920 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-dns-svc\") pod \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.278019 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-config\") pod \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.278049 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz727\" (UniqueName: \"kubernetes.io/projected/9db5376c-59e4-4552-9e55-cc6f9df24e5f-kube-api-access-kz727\") pod \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\" (UID: \"9db5376c-59e4-4552-9e55-cc6f9df24e5f\") " Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.278122 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bgqh\" (UniqueName: \"kubernetes.io/projected/8325fbe7-218b-4d83-9973-6967fa0a726c-kube-api-access-5bgqh\") pod \"8325fbe7-218b-4d83-9973-6967fa0a726c\" (UID: \"8325fbe7-218b-4d83-9973-6967fa0a726c\") " Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.278757 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8325fbe7-218b-4d83-9973-6967fa0a726c-config" (OuterVolumeSpecName: "config") pod "8325fbe7-218b-4d83-9973-6967fa0a726c" (UID: "8325fbe7-218b-4d83-9973-6967fa0a726c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.278825 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-config" (OuterVolumeSpecName: "config") pod "9db5376c-59e4-4552-9e55-cc6f9df24e5f" (UID: "9db5376c-59e4-4552-9e55-cc6f9df24e5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.279833 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9db5376c-59e4-4552-9e55-cc6f9df24e5f" (UID: "9db5376c-59e4-4552-9e55-cc6f9df24e5f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.286441 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8325fbe7-218b-4d83-9973-6967fa0a726c-kube-api-access-5bgqh" (OuterVolumeSpecName: "kube-api-access-5bgqh") pod "8325fbe7-218b-4d83-9973-6967fa0a726c" (UID: "8325fbe7-218b-4d83-9973-6967fa0a726c"). InnerVolumeSpecName "kube-api-access-5bgqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.292857 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db5376c-59e4-4552-9e55-cc6f9df24e5f-kube-api-access-kz727" (OuterVolumeSpecName: "kube-api-access-kz727") pod "9db5376c-59e4-4552-9e55-cc6f9df24e5f" (UID: "9db5376c-59e4-4552-9e55-cc6f9df24e5f"). InnerVolumeSpecName "kube-api-access-kz727". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.380011 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bgqh\" (UniqueName: \"kubernetes.io/projected/8325fbe7-218b-4d83-9973-6967fa0a726c-kube-api-access-5bgqh\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.380044 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8325fbe7-218b-4d83-9973-6967fa0a726c-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.380054 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.380062 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9db5376c-59e4-4552-9e55-cc6f9df24e5f-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.380070 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz727\" (UniqueName: \"kubernetes.io/projected/9db5376c-59e4-4552-9e55-cc6f9df24e5f-kube-api-access-kz727\") on node \"crc\" DevicePath \"\"" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.680697 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72f9411b-61f4-4615-8653-5f90b629690d","Type":"ContainerStarted","Data":"daa8d142bc905225839350387f54d5d85e7e63e3eae6f27da2901a157fc2ea72"} Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.684314 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a0fa5ac-9268-4db9-8e40-42aca5111af9","Type":"ContainerStarted","Data":"259359c8e6c2d1faf101b6f5d0f1887b1de6ea4e766412b2cf6cbd8f9fc64fb4"} Dec 11 
08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.710006 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" event={"ID":"9db5376c-59e4-4552-9e55-cc6f9df24e5f","Type":"ContainerDied","Data":"7b741b883b63595e0b55da11bec58f51c52b2a6a6ee6c67fd5ed90621340582c"} Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.712103 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gzbq4" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.713432 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sw28r" event={"ID":"9698b65a-4246-466e-aac8-e7fe29c4063d","Type":"ContainerStarted","Data":"b2fec9fc53c959c2ca7ff4ce90c1086ccd611beb8a41a10018633d4b7b9b789c"} Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.717458 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" event={"ID":"8325fbe7-218b-4d83-9973-6967fa0a726c","Type":"ContainerDied","Data":"7ce20d9b7e8259815b9863e5cdbb9da5f17c6c30eb20ef32bf30442a508f568f"} Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.717554 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-6clvs" Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.837818 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6clvs"] Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.845479 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-6clvs"] Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.852691 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gzbq4"] Dec 11 08:40:39 crc kubenswrapper[4992]: I1211 08:40:39.857500 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gzbq4"] Dec 11 08:40:40 crc kubenswrapper[4992]: I1211 08:40:40.107257 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8325fbe7-218b-4d83-9973-6967fa0a726c" path="/var/lib/kubelet/pods/8325fbe7-218b-4d83-9973-6967fa0a726c/volumes" Dec 11 08:40:40 crc kubenswrapper[4992]: I1211 08:40:40.108281 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db5376c-59e4-4552-9e55-cc6f9df24e5f" path="/var/lib/kubelet/pods/9db5376c-59e4-4552-9e55-cc6f9df24e5f/volumes" Dec 11 08:40:55 crc kubenswrapper[4992]: E1211 08:40:55.980483 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 11 08:40:55 crc kubenswrapper[4992]: E1211 08:40:55.981259 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fsfb8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(0c5eb79c-8f1c-4416-ab38-00b67e0b3f86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:55 crc kubenswrapper[4992]: E1211 08:40:55.982414 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="0c5eb79c-8f1c-4416-ab38-00b67e0b3f86" Dec 11 08:40:56 crc kubenswrapper[4992]: E1211 08:40:56.593608 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 11 08:40:56 crc kubenswrapper[4992]: E1211 08:40:56.594538 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n57bh564h676h5bbh5dbh598h665h568h5b8h594h8bh5c4hbfh5d4h78hc4h74h68h5b8h54h9chfbh98h98h95h668h659h58fhf9hd9h598hd7q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z9psr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil
,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-djp6h_openstack(43b8eb34-f000-49af-bcf9-7507f85afd2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:56 crc kubenswrapper[4992]: E1211 08:40:56.595785 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-djp6h" podUID="43b8eb34-f000-49af-bcf9-7507f85afd2b" Dec 11 08:40:56 crc kubenswrapper[4992]: E1211 08:40:56.858423 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-djp6h" podUID="43b8eb34-f000-49af-bcf9-7507f85afd2b" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.012504 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.013056 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch5b9h644h557h644h579h689h5cbh66bh59chcchc5h7bh5b9h5fh659hd7hd7h57dh66dh54fh5cch7ch99h657h5b7h95h85h66h59h645h88q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volum
eMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4czm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(a84aae8d-da28-42b4-80a4-99e157fb57ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454501 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qw2v9"] Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454829 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454842 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454863 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454868 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454885 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="extract-utilities" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454891 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="extract-utilities" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454902 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="extract-utilities" 
Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454907 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="extract-utilities" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454920 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="extract-utilities" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454926 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="extract-utilities" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454934 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="extract-content" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454939 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="extract-content" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454952 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454957 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454969 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="extract-content" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454975 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="extract-content" Dec 11 08:40:58 crc kubenswrapper[4992]: E1211 08:40:58.454988 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="extract-content" Dec 11 
08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.454994 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="extract-content" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.455153 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac3833a-d349-476d-9759-28824233d07a" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.455168 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="d948d471-7cda-4f09-a493-f28c8fb7f439" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.455176 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf89354-d5eb-4100-9972-a1ee48f0f123" containerName="registry-server" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.455686 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.461003 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.475441 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qw2v9"] Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.565902 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c315d3-3609-4a88-bf95-4beedb848ecf-combined-ca-bundle\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.565953 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25c315d3-3609-4a88-bf95-4beedb848ecf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.565970 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thz4x\" (UniqueName: \"kubernetes.io/projected/25c315d3-3609-4a88-bf95-4beedb848ecf-kube-api-access-thz4x\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.565996 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/25c315d3-3609-4a88-bf95-4beedb848ecf-ovs-rundir\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.566035 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/25c315d3-3609-4a88-bf95-4beedb848ecf-ovn-rundir\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.566080 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c315d3-3609-4a88-bf95-4beedb848ecf-config\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.624750 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ckhzn"] 
Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.667654 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c315d3-3609-4a88-bf95-4beedb848ecf-combined-ca-bundle\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.667743 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c315d3-3609-4a88-bf95-4beedb848ecf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.667773 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thz4x\" (UniqueName: \"kubernetes.io/projected/25c315d3-3609-4a88-bf95-4beedb848ecf-kube-api-access-thz4x\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.667804 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/25c315d3-3609-4a88-bf95-4beedb848ecf-ovs-rundir\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.667844 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/25c315d3-3609-4a88-bf95-4beedb848ecf-ovn-rundir\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc 
kubenswrapper[4992]: I1211 08:40:58.667900 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c315d3-3609-4a88-bf95-4beedb848ecf-config\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.668913 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c315d3-3609-4a88-bf95-4beedb848ecf-config\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.670232 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/25c315d3-3609-4a88-bf95-4beedb848ecf-ovs-rundir\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.670237 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/25c315d3-3609-4a88-bf95-4beedb848ecf-ovn-rundir\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.672127 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dh8p9"] Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.673472 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.675420 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c315d3-3609-4a88-bf95-4beedb848ecf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.675690 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.690342 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thz4x\" (UniqueName: \"kubernetes.io/projected/25c315d3-3609-4a88-bf95-4beedb848ecf-kube-api-access-thz4x\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.697942 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c315d3-3609-4a88-bf95-4beedb848ecf-combined-ca-bundle\") pod \"ovn-controller-metrics-qw2v9\" (UID: \"25c315d3-3609-4a88-bf95-4beedb848ecf\") " pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.722483 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dh8p9"] Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.769176 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-config\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc 
kubenswrapper[4992]: I1211 08:40:58.769227 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7b58\" (UniqueName: \"kubernetes.io/projected/d11380fe-b009-4ce2-a338-796c2677de3b-kube-api-access-q7b58\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.769273 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.769372 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.796591 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-qw2v9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.822330 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qjhqv"] Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.848974 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7m76v"] Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.850291 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.853324 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.864181 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7m76v"] Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.870257 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.870319 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-config\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.870345 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7b58\" (UniqueName: \"kubernetes.io/projected/d11380fe-b009-4ce2-a338-796c2677de3b-kube-api-access-q7b58\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.870387 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.871371 
4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-config\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.871900 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.872113 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.897472 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7b58\" (UniqueName: \"kubernetes.io/projected/d11380fe-b009-4ce2-a338-796c2677de3b-kube-api-access-q7b58\") pod \"dnsmasq-dns-7fd796d7df-dh8p9\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.996999 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwm5w\" (UniqueName: \"kubernetes.io/projected/141181c2-23c2-46f1-bea8-0af18021f8ed-kube-api-access-pwm5w\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.997101 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.997195 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.997309 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:58 crc kubenswrapper[4992]: I1211 08:40:58.997386 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-config\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.046799 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.099589 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwm5w\" (UniqueName: \"kubernetes.io/projected/141181c2-23c2-46f1-bea8-0af18021f8ed-kube-api-access-pwm5w\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.100217 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.100359 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.100466 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.100558 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-config\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" 
Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.101230 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.101971 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.102320 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-config\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.103542 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.122840 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwm5w\" (UniqueName: \"kubernetes.io/projected/141181c2-23c2-46f1-bea8-0af18021f8ed-kube-api-access-pwm5w\") pod \"dnsmasq-dns-86db49b7ff-7m76v\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:40:59 crc kubenswrapper[4992]: I1211 08:40:59.175560 4992 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:41:02 crc kubenswrapper[4992]: E1211 08:41:02.584186 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Dec 11 08:41:02 crc kubenswrapper[4992]: E1211 08:41:02.584745 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n589h646h66h68h589h674h54fhc6h68dh674h65bhc6hbh567h65dh66h6ch68bh596h55fh5h55bh686h58h6hbdh5bfh89h577h5bdh657hbbq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubP
ath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2l99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(c51bf698-2728-4a49-b7e1-d80c304725e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:41:05 crc kubenswrapper[4992]: E1211 08:41:05.494825 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 11 08:41:05 crc kubenswrapper[4992]: E1211 08:41:05.495170 4992 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 11 08:41:05 crc kubenswrapper[4992]: E1211 08:41:05.495344 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szfwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 
8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(f99cf716-c024-485a-8d47-20218de1cb10): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:41:05 crc kubenswrapper[4992]: E1211 08:41:05.496592 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="f99cf716-c024-485a-8d47-20218de1cb10" Dec 11 08:41:05 crc kubenswrapper[4992]: E1211 08:41:05.985621 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="f99cf716-c024-485a-8d47-20218de1cb10" Dec 11 08:41:06 crc kubenswrapper[4992]: I1211 08:41:06.401662 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-86db49b7ff-7m76v"] Dec 11 08:41:06 crc kubenswrapper[4992]: I1211 08:41:06.410574 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qw2v9"] Dec 11 08:41:06 crc kubenswrapper[4992]: I1211 08:41:06.533255 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dh8p9"] Dec 11 08:41:06 crc kubenswrapper[4992]: I1211 08:41:06.965743 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67","Type":"ContainerStarted","Data":"47ad442bdfdb5eee7aad192cc83ed9ab8a169f14fad66b9da17f00227e79c4f9"} Dec 11 08:41:06 crc kubenswrapper[4992]: I1211 08:41:06.968442 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a9e5a806-cf0a-4149-81d7-803170a48b0e","Type":"ContainerStarted","Data":"2b29082d210fbba4e67bbb0eba6c249ec5a950172b81e6eef4fc5c13934edec4"} Dec 11 08:41:06 crc kubenswrapper[4992]: I1211 08:41:06.968833 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 11 08:41:07 crc kubenswrapper[4992]: I1211 08:41:07.039158 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=58.345494579 podStartE2EDuration="1m19.039129472s" podCreationTimestamp="2025-12-11 08:39:48 +0000 UTC" firstStartedPulling="2025-12-11 08:40:36.510600259 +0000 UTC m=+1060.770074175" lastFinishedPulling="2025-12-11 08:40:57.204235122 +0000 UTC m=+1081.463709068" observedRunningTime="2025-12-11 08:41:07.006757777 +0000 UTC m=+1091.266231703" watchObservedRunningTime="2025-12-11 08:41:07.039129472 +0000 UTC m=+1091.298603418" Dec 11 08:41:10 crc kubenswrapper[4992]: W1211 08:41:10.186249 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd11380fe_b009_4ce2_a338_796c2677de3b.slice/crio-69052f173c43e2f6e525cc1dc21e7372fdb7776cad6eb20955c777b6ce143c81 WatchSource:0}: Error finding container 69052f173c43e2f6e525cc1dc21e7372fdb7776cad6eb20955c777b6ce143c81: Status 404 returned error can't find the container with id 69052f173c43e2f6e525cc1dc21e7372fdb7776cad6eb20955c777b6ce143c81 Dec 11 08:41:10 crc kubenswrapper[4992]: I1211 08:41:10.996365 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" event={"ID":"d11380fe-b009-4ce2-a338-796c2677de3b","Type":"ContainerStarted","Data":"69052f173c43e2f6e525cc1dc21e7372fdb7776cad6eb20955c777b6ce143c81"} Dec 11 08:41:10 crc kubenswrapper[4992]: I1211 08:41:10.997426 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qw2v9" event={"ID":"25c315d3-3609-4a88-bf95-4beedb848ecf","Type":"ContainerStarted","Data":"5defe62e8e03b8f604ada72fe00d4a58a00372e808bc5dd4555a679ade8c38ab"} Dec 11 08:41:10 crc kubenswrapper[4992]: I1211 08:41:10.998590 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" event={"ID":"141181c2-23c2-46f1-bea8-0af18021f8ed","Type":"ContainerStarted","Data":"c4b9214df682e32648c317bf5a8646808943179b850e7c7f701f672899994430"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.010831 4992 generic.go:334] "Generic (PLEG): container finished" podID="d11380fe-b009-4ce2-a338-796c2677de3b" containerID="a6e1edfb6dc9b543727ad08bf581b98d41e444de0c2c2ffc4d7a3ad0ae9f891c" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.011804 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" event={"ID":"d11380fe-b009-4ce2-a338-796c2677de3b","Type":"ContainerDied","Data":"a6e1edfb6dc9b543727ad08bf581b98d41e444de0c2c2ffc4d7a3ad0ae9f891c"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.014246 4992 
generic.go:334] "Generic (PLEG): container finished" podID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" containerID="bdfffe09a799e13926e3ab8126037b5fe6bfa838542b3205bf26318d1a4abfbb" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.014315 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" event={"ID":"e3b993d2-9421-48c0-b4fc-b9eef93186f4","Type":"ContainerDied","Data":"bdfffe09a799e13926e3ab8126037b5fe6bfa838542b3205bf26318d1a4abfbb"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.017813 4992 generic.go:334] "Generic (PLEG): container finished" podID="9698b65a-4246-466e-aac8-e7fe29c4063d" containerID="e4c16471d42aa49ac5bd0906c6071b56e62f5bfd12b3475bf547a0fbee8dce40" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.018116 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sw28r" event={"ID":"9698b65a-4246-466e-aac8-e7fe29c4063d","Type":"ContainerDied","Data":"e4c16471d42aa49ac5bd0906c6071b56e62f5bfd12b3475bf547a0fbee8dce40"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.039792 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86","Type":"ContainerStarted","Data":"927b7635eefc169acf25852c676408f7a4c81c7917766d53524b3940d271f915"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.044885 4992 generic.go:334] "Generic (PLEG): container finished" podID="72f9411b-61f4-4615-8653-5f90b629690d" containerID="daa8d142bc905225839350387f54d5d85e7e63e3eae6f27da2901a157fc2ea72" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.044997 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72f9411b-61f4-4615-8653-5f90b629690d","Type":"ContainerDied","Data":"daa8d142bc905225839350387f54d5d85e7e63e3eae6f27da2901a157fc2ea72"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.047286 4992 
generic.go:334] "Generic (PLEG): container finished" podID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerID="259359c8e6c2d1faf101b6f5d0f1887b1de6ea4e766412b2cf6cbd8f9fc64fb4" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.047338 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a0fa5ac-9268-4db9-8e40-42aca5111af9","Type":"ContainerDied","Data":"259359c8e6c2d1faf101b6f5d0f1887b1de6ea4e766412b2cf6cbd8f9fc64fb4"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.048985 4992 generic.go:334] "Generic (PLEG): container finished" podID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerID="f2db509f500b046a4d0b0b46bc256b2d8d8107dd50bfec88f3e0a10e8bfafe16" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.049027 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" event={"ID":"141181c2-23c2-46f1-bea8-0af18021f8ed","Type":"ContainerDied","Data":"f2db509f500b046a4d0b0b46bc256b2d8d8107dd50bfec88f3e0a10e8bfafe16"} Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.065796 4992 generic.go:334] "Generic (PLEG): container finished" podID="2fc10a15-b026-4617-b650-5eb0f8af0299" containerID="052f3b38c4c16e58fc030d231e69c58addacffcb61f4dc2c2f58e70ce1a508f8" exitCode=0 Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.065832 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" event={"ID":"2fc10a15-b026-4617-b650-5eb0f8af0299","Type":"ContainerDied","Data":"052f3b38c4c16e58fc030d231e69c58addacffcb61f4dc2c2f58e70ce1a508f8"} Dec 11 08:41:12 crc kubenswrapper[4992]: E1211 08:41:12.535090 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="a84aae8d-da28-42b4-80a4-99e157fb57ec" Dec 11 08:41:12 crc 
kubenswrapper[4992]: I1211 08:41:12.544391 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.555782 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.677820 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxljl\" (UniqueName: \"kubernetes.io/projected/e3b993d2-9421-48c0-b4fc-b9eef93186f4-kube-api-access-qxljl\") pod \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.678208 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv6sf\" (UniqueName: \"kubernetes.io/projected/2fc10a15-b026-4617-b650-5eb0f8af0299-kube-api-access-fv6sf\") pod \"2fc10a15-b026-4617-b650-5eb0f8af0299\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.678239 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-config\") pod \"2fc10a15-b026-4617-b650-5eb0f8af0299\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.678281 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-dns-svc\") pod \"2fc10a15-b026-4617-b650-5eb0f8af0299\" (UID: \"2fc10a15-b026-4617-b650-5eb0f8af0299\") " Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.678352 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-config\") pod \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.678401 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-dns-svc\") pod \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\" (UID: \"e3b993d2-9421-48c0-b4fc-b9eef93186f4\") " Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.684877 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc10a15-b026-4617-b650-5eb0f8af0299-kube-api-access-fv6sf" (OuterVolumeSpecName: "kube-api-access-fv6sf") pod "2fc10a15-b026-4617-b650-5eb0f8af0299" (UID: "2fc10a15-b026-4617-b650-5eb0f8af0299"). InnerVolumeSpecName "kube-api-access-fv6sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.699795 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b993d2-9421-48c0-b4fc-b9eef93186f4-kube-api-access-qxljl" (OuterVolumeSpecName: "kube-api-access-qxljl") pod "e3b993d2-9421-48c0-b4fc-b9eef93186f4" (UID: "e3b993d2-9421-48c0-b4fc-b9eef93186f4"). InnerVolumeSpecName "kube-api-access-qxljl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.724552 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3b993d2-9421-48c0-b4fc-b9eef93186f4" (UID: "e3b993d2-9421-48c0-b4fc-b9eef93186f4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.733122 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-config" (OuterVolumeSpecName: "config") pod "e3b993d2-9421-48c0-b4fc-b9eef93186f4" (UID: "e3b993d2-9421-48c0-b4fc-b9eef93186f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.733349 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2fc10a15-b026-4617-b650-5eb0f8af0299" (UID: "2fc10a15-b026-4617-b650-5eb0f8af0299"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.762062 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-config" (OuterVolumeSpecName: "config") pod "2fc10a15-b026-4617-b650-5eb0f8af0299" (UID: "2fc10a15-b026-4617-b650-5eb0f8af0299"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.783491 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.783528 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxljl\" (UniqueName: \"kubernetes.io/projected/e3b993d2-9421-48c0-b4fc-b9eef93186f4-kube-api-access-qxljl\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.783538 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv6sf\" (UniqueName: \"kubernetes.io/projected/2fc10a15-b026-4617-b650-5eb0f8af0299-kube-api-access-fv6sf\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.783547 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.783557 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2fc10a15-b026-4617-b650-5eb0f8af0299-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:12 crc kubenswrapper[4992]: I1211 08:41:12.783567 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3b993d2-9421-48c0-b4fc-b9eef93186f4-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.075914 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sw28r" event={"ID":"9698b65a-4246-466e-aac8-e7fe29c4063d","Type":"ContainerStarted","Data":"e43678c7d51496c804c255f21abc45740271655c02c5c0ae2c57d8ebea44cbfd"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 
08:41:13.078225 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72f9411b-61f4-4615-8653-5f90b629690d","Type":"ContainerStarted","Data":"25983fdca35d0058db378bc88fb37a396b0fb7cf9b358a0d61704c6b9388d9fc"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.078375 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.083140 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a0fa5ac-9268-4db9-8e40-42aca5111af9","Type":"ContainerStarted","Data":"aa5295d6e2f7f8f85b1f37929a049b5cb2e5c256a729ae48c02399dada605cb2"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.083566 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.085589 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" event={"ID":"141181c2-23c2-46f1-bea8-0af18021f8ed","Type":"ContainerStarted","Data":"1bbea0374d2943ffdf678a1bf9dadda5d95ac984b94a5177482f543e7bb11490"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.085607 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.091930 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" event={"ID":"2fc10a15-b026-4617-b650-5eb0f8af0299","Type":"ContainerDied","Data":"f1b9a3fa8e09110c70b955c831ca3ecd05150bc0baf0fa6a5c896caf662b75dd"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.091972 4992 scope.go:117] "RemoveContainer" containerID="052f3b38c4c16e58fc030d231e69c58addacffcb61f4dc2c2f58e70ce1a508f8" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.092071 4992 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-ckhzn" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.103741 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.956575574 podStartE2EDuration="1m30.103726394s" podCreationTimestamp="2025-12-11 08:39:43 +0000 UTC" firstStartedPulling="2025-12-11 08:39:45.624608521 +0000 UTC m=+1009.884082457" lastFinishedPulling="2025-12-11 08:40:37.771759351 +0000 UTC m=+1062.031233277" observedRunningTime="2025-12-11 08:41:13.103215512 +0000 UTC m=+1097.362689438" watchObservedRunningTime="2025-12-11 08:41:13.103726394 +0000 UTC m=+1097.363200320" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.107227 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a84aae8d-da28-42b4-80a4-99e157fb57ec","Type":"ContainerStarted","Data":"9f390995f4b9ca34e5af193e8514a3b31dd9802d06104fdb2f3a48075ffdbe30"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.121729 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" event={"ID":"d11380fe-b009-4ce2-a338-796c2677de3b","Type":"ContainerStarted","Data":"12b6d988132f276863db450bcca34399f66b9fa88f758ce91a81fd604dbfb6c2"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.122396 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.130470 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" event={"ID":"e3b993d2-9421-48c0-b4fc-b9eef93186f4","Type":"ContainerDied","Data":"c573f8476754e45c1ebbf0caa8fd5cf3e741daa7823f52f074019f1b48a52b21"} Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.130611 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qjhqv" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.140451 4992 scope.go:117] "RemoveContainer" containerID="bdfffe09a799e13926e3ab8126037b5fe6bfa838542b3205bf26318d1a4abfbb" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.159463 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" podStartSLOduration=15.15944551 podStartE2EDuration="15.15944551s" podCreationTimestamp="2025-12-11 08:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:13.155201826 +0000 UTC m=+1097.414675752" watchObservedRunningTime="2025-12-11 08:41:13.15944551 +0000 UTC m=+1097.418919436" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.196971 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.390406778 podStartE2EDuration="1m30.19695645s" podCreationTimestamp="2025-12-11 08:39:43 +0000 UTC" firstStartedPulling="2025-12-11 08:39:45.904328397 +0000 UTC m=+1010.163802323" lastFinishedPulling="2025-12-11 08:40:37.710878069 +0000 UTC m=+1061.970351995" observedRunningTime="2025-12-11 08:41:13.191132616 +0000 UTC m=+1097.450606552" watchObservedRunningTime="2025-12-11 08:41:13.19695645 +0000 UTC m=+1097.456430376" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.227500 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qjhqv"] Dec 11 08:41:13 crc kubenswrapper[4992]: E1211 08:41:13.229004 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="c51bf698-2728-4a49-b7e1-d80c304725e2" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 
08:41:13.241464 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qjhqv"] Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.276264 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ckhzn"] Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.281860 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-ckhzn"] Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.290944 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" podStartSLOduration=15.290926773 podStartE2EDuration="15.290926773s" podCreationTimestamp="2025-12-11 08:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:13.288013482 +0000 UTC m=+1097.547487418" watchObservedRunningTime="2025-12-11 08:41:13.290926773 +0000 UTC m=+1097.550400699" Dec 11 08:41:13 crc kubenswrapper[4992]: I1211 08:41:13.586704 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.105377 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc10a15-b026-4617-b650-5eb0f8af0299" path="/var/lib/kubelet/pods/2fc10a15-b026-4617-b650-5eb0f8af0299/volumes" Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.106395 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" path="/var/lib/kubelet/pods/e3b993d2-9421-48c0-b4fc-b9eef93186f4/volumes" Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.139295 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qw2v9" event={"ID":"25c315d3-3609-4a88-bf95-4beedb848ecf","Type":"ContainerStarted","Data":"bd26094deb1015b8ac85710fc651fc4cebd419de2147900adc6c0d98cbc99499"} 
Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.140957 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djp6h" event={"ID":"43b8eb34-f000-49af-bcf9-7507f85afd2b","Type":"ContainerStarted","Data":"3c0644165b463b780b83a7956a7d9f87541bcf60549e2d6a480e627a408df054"} Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.142426 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c51bf698-2728-4a49-b7e1-d80c304725e2","Type":"ContainerStarted","Data":"1427b5ce9a32804a021fc4ae59704ab6bafe359ca6e4bde5ee7928664cb72b61"} Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.149434 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-sw28r" event={"ID":"9698b65a-4246-466e-aac8-e7fe29c4063d","Type":"ContainerStarted","Data":"a203ed9f6ddadd6de733f60e15a2b7b8e0bdb1c2d79dda5036f7e1f5d4fc182b"} Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.167085 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qw2v9" podStartSLOduration=13.705826349 podStartE2EDuration="16.167069088s" podCreationTimestamp="2025-12-11 08:40:58 +0000 UTC" firstStartedPulling="2025-12-11 08:41:10.194486334 +0000 UTC m=+1094.453960260" lastFinishedPulling="2025-12-11 08:41:12.655729053 +0000 UTC m=+1096.915202999" observedRunningTime="2025-12-11 08:41:14.164191868 +0000 UTC m=+1098.423665794" watchObservedRunningTime="2025-12-11 08:41:14.167069088 +0000 UTC m=+1098.426543014" Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.190984 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-sw28r" podStartSLOduration=56.097363252 podStartE2EDuration="1m20.190951215s" podCreationTimestamp="2025-12-11 08:39:54 +0000 UTC" firstStartedPulling="2025-12-11 08:40:38.994936373 +0000 UTC m=+1063.254410299" lastFinishedPulling="2025-12-11 08:41:03.088524316 +0000 UTC 
m=+1087.347998262" observedRunningTime="2025-12-11 08:41:14.186274949 +0000 UTC m=+1098.445748875" watchObservedRunningTime="2025-12-11 08:41:14.190951215 +0000 UTC m=+1098.450425141" Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.484420 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sw28r" Dec 11 08:41:14 crc kubenswrapper[4992]: I1211 08:41:14.484489 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-sw28r" Dec 11 08:41:15 crc kubenswrapper[4992]: I1211 08:41:15.156923 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-djp6h" Dec 11 08:41:15 crc kubenswrapper[4992]: I1211 08:41:15.174248 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-djp6h" podStartSLOduration=44.695731981 podStartE2EDuration="1m21.174228446s" podCreationTimestamp="2025-12-11 08:39:54 +0000 UTC" firstStartedPulling="2025-12-11 08:40:36.496400731 +0000 UTC m=+1060.755874657" lastFinishedPulling="2025-12-11 08:41:12.974897196 +0000 UTC m=+1097.234371122" observedRunningTime="2025-12-11 08:41:15.171044478 +0000 UTC m=+1099.430518414" watchObservedRunningTime="2025-12-11 08:41:15.174228446 +0000 UTC m=+1099.433702372" Dec 11 08:41:19 crc kubenswrapper[4992]: I1211 08:41:19.048977 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:41:19 crc kubenswrapper[4992]: I1211 08:41:19.176875 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:41:19 crc kubenswrapper[4992]: I1211 08:41:19.246806 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dh8p9"] Dec 11 08:41:19 crc kubenswrapper[4992]: I1211 08:41:19.247374 4992 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" containerName="dnsmasq-dns" containerID="cri-o://12b6d988132f276863db450bcca34399f66b9fa88f758ce91a81fd604dbfb6c2" gracePeriod=10 Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.191452 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c51bf698-2728-4a49-b7e1-d80c304725e2","Type":"ContainerStarted","Data":"398d7279e4668e1b5bde96c188456e6d30c783e612fb84342fc187e60a3fd35a"} Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.194991 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a84aae8d-da28-42b4-80a4-99e157fb57ec","Type":"ContainerStarted","Data":"81059d95766a0aca28c6024c484074c97beb39271c85d1d41c79080de8210d3e"} Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.202793 4992 generic.go:334] "Generic (PLEG): container finished" podID="d11380fe-b009-4ce2-a338-796c2677de3b" containerID="12b6d988132f276863db450bcca34399f66b9fa88f758ce91a81fd604dbfb6c2" exitCode=0 Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.202841 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" event={"ID":"d11380fe-b009-4ce2-a338-796c2677de3b","Type":"ContainerDied","Data":"12b6d988132f276863db450bcca34399f66b9fa88f758ce91a81fd604dbfb6c2"} Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.215827 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=48.542293599 podStartE2EDuration="1m29.215804883s" podCreationTimestamp="2025-12-11 08:39:51 +0000 UTC" firstStartedPulling="2025-12-11 08:40:38.300285926 +0000 UTC m=+1062.559759852" lastFinishedPulling="2025-12-11 08:41:18.97379721 +0000 UTC m=+1103.233271136" observedRunningTime="2025-12-11 08:41:20.213443715 +0000 UTC m=+1104.472917651" watchObservedRunningTime="2025-12-11 08:41:20.215804883 +0000 UTC 
m=+1104.475278829" Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.224858 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 11 08:41:20 crc kubenswrapper[4992]: I1211 08:41:20.250251 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=43.745261394 podStartE2EDuration="1m24.250233867s" podCreationTimestamp="2025-12-11 08:39:56 +0000 UTC" firstStartedPulling="2025-12-11 08:40:38.467518485 +0000 UTC m=+1062.726992411" lastFinishedPulling="2025-12-11 08:41:18.972490958 +0000 UTC m=+1103.231964884" observedRunningTime="2025-12-11 08:41:20.243548553 +0000 UTC m=+1104.503022469" watchObservedRunningTime="2025-12-11 08:41:20.250233867 +0000 UTC m=+1104.509707793" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.684836 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-d5c6v"] Dec 11 08:41:21 crc kubenswrapper[4992]: E1211 08:41:21.685486 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc10a15-b026-4617-b650-5eb0f8af0299" containerName="init" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.685500 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc10a15-b026-4617-b650-5eb0f8af0299" containerName="init" Dec 11 08:41:21 crc kubenswrapper[4992]: E1211 08:41:21.685517 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" containerName="init" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.685523 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" containerName="init" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.685691 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b993d2-9421-48c0-b4fc-b9eef93186f4" containerName="init" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.685709 4992 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc10a15-b026-4617-b650-5eb0f8af0299" containerName="init" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.686602 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.711160 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d5c6v"] Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.783389 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.849248 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-config\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.849298 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-dns-svc\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.849329 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.849542 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-g9ssz\" (UniqueName: \"kubernetes.io/projected/4c0a2193-f200-40ba-9038-68c99922f75a-kube-api-access-g9ssz\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.849797 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.951444 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.951510 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-config\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.951531 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-dns-svc\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.951559 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.951595 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ssz\" (UniqueName: \"kubernetes.io/projected/4c0a2193-f200-40ba-9038-68c99922f75a-kube-api-access-g9ssz\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.952543 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-config\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.952569 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.952718 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.952742 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-dns-svc\") pod 
\"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:21 crc kubenswrapper[4992]: I1211 08:41:21.971512 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ssz\" (UniqueName: \"kubernetes.io/projected/4c0a2193-f200-40ba-9038-68c99922f75a-kube-api-access-g9ssz\") pod \"dnsmasq-dns-698758b865-d5c6v\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.015683 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.494852 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d5c6v"] Dec 11 08:41:22 crc kubenswrapper[4992]: W1211 08:41:22.506448 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c0a2193_f200_40ba_9038_68c99922f75a.slice/crio-7609d77e074ccef179d1a077b49f1a15242f838c6bc9ad1c337cc2219cceb1c7 WatchSource:0}: Error finding container 7609d77e074ccef179d1a077b49f1a15242f838c6bc9ad1c337cc2219cceb1c7: Status 404 returned error can't find the container with id 7609d77e074ccef179d1a077b49f1a15242f838c6bc9ad1c337cc2219cceb1c7 Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.563835 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.663903 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7b58\" (UniqueName: \"kubernetes.io/projected/d11380fe-b009-4ce2-a338-796c2677de3b-kube-api-access-q7b58\") pod \"d11380fe-b009-4ce2-a338-796c2677de3b\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.663995 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-config\") pod \"d11380fe-b009-4ce2-a338-796c2677de3b\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.664046 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-ovsdbserver-nb\") pod \"d11380fe-b009-4ce2-a338-796c2677de3b\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.664093 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-dns-svc\") pod \"d11380fe-b009-4ce2-a338-796c2677de3b\" (UID: \"d11380fe-b009-4ce2-a338-796c2677de3b\") " Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.669594 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11380fe-b009-4ce2-a338-796c2677de3b-kube-api-access-q7b58" (OuterVolumeSpecName: "kube-api-access-q7b58") pod "d11380fe-b009-4ce2-a338-796c2677de3b" (UID: "d11380fe-b009-4ce2-a338-796c2677de3b"). InnerVolumeSpecName "kube-api-access-q7b58". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.711685 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d11380fe-b009-4ce2-a338-796c2677de3b" (UID: "d11380fe-b009-4ce2-a338-796c2677de3b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.712010 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d11380fe-b009-4ce2-a338-796c2677de3b" (UID: "d11380fe-b009-4ce2-a338-796c2677de3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.716267 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-config" (OuterVolumeSpecName: "config") pod "d11380fe-b009-4ce2-a338-796c2677de3b" (UID: "d11380fe-b009-4ce2-a338-796c2677de3b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.766096 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.766147 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.766159 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d11380fe-b009-4ce2-a338-796c2677de3b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.766170 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7b58\" (UniqueName: \"kubernetes.io/projected/d11380fe-b009-4ce2-a338-796c2677de3b-kube-api-access-q7b58\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.783151 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.849685 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 11 08:41:22 crc kubenswrapper[4992]: E1211 08:41:22.850577 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" containerName="dnsmasq-dns" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.850598 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" containerName="dnsmasq-dns" Dec 11 08:41:22 crc kubenswrapper[4992]: E1211 08:41:22.850663 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" 
containerName="init" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.850671 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" containerName="init" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.850997 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" containerName="dnsmasq-dns" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.897549 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.901650 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.901775 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-dqprl" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.905858 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.909001 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.922574 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.970257 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.970328 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-cache\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.970372 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rgzx\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-kube-api-access-5rgzx\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.970482 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-lock\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:22 crc kubenswrapper[4992]: I1211 08:41:22.970531 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.072333 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rgzx\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-kube-api-access-5rgzx\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.074898 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-lock\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " 
pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.074935 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.075000 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.075028 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-cache\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.075473 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-cache\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: E1211 08:41:23.075888 4992 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 08:41:23 crc kubenswrapper[4992]: E1211 08:41:23.075950 4992 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 08:41:23 crc kubenswrapper[4992]: E1211 08:41:23.076014 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift 
podName:47851b57-2a65-4a8a-b2a2-f01a5a2d7833 nodeName:}" failed. No retries permitted until 2025-12-11 08:41:23.57599197 +0000 UTC m=+1107.835465966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift") pod "swift-storage-0" (UID: "47851b57-2a65-4a8a-b2a2-f01a5a2d7833") : configmap "swift-ring-files" not found Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.076132 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.076374 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-lock\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.107552 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rgzx\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-kube-api-access-5rgzx\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.119365 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.224743 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.227005 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f99cf716-c024-485a-8d47-20218de1cb10","Type":"ContainerStarted","Data":"b2c192bbc9693174279048dbd4aaf94b35fbc1cc5590bf74ee51197729b6b44e"} Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.227243 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.229382 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" event={"ID":"d11380fe-b009-4ce2-a338-796c2677de3b","Type":"ContainerDied","Data":"69052f173c43e2f6e525cc1dc21e7372fdb7776cad6eb20955c777b6ce143c81"} Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.229408 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dh8p9" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.229427 4992 scope.go:117] "RemoveContainer" containerID="12b6d988132f276863db450bcca34399f66b9fa88f758ce91a81fd604dbfb6c2" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.234134 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d5c6v" event={"ID":"4c0a2193-f200-40ba-9038-68c99922f75a","Type":"ContainerStarted","Data":"56fe2d93d804795df924f20b91f8a282889092bd038aa4c88ca421ebac3e98d3"} Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.234172 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d5c6v" event={"ID":"4c0a2193-f200-40ba-9038-68c99922f75a","Type":"ContainerStarted","Data":"7609d77e074ccef179d1a077b49f1a15242f838c6bc9ad1c337cc2219cceb1c7"} Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.248968 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=47.186843646 podStartE2EDuration="1m32.24895155s" podCreationTimestamp="2025-12-11 08:39:51 +0000 UTC" firstStartedPulling="2025-12-11 08:40:37.711656009 +0000 UTC m=+1061.971129935" lastFinishedPulling="2025-12-11 08:41:22.773763913 +0000 UTC m=+1107.033237839" observedRunningTime="2025-12-11 08:41:23.248102839 +0000 UTC m=+1107.507576765" watchObservedRunningTime="2025-12-11 08:41:23.24895155 +0000 UTC m=+1107.508425476" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.251003 4992 scope.go:117] "RemoveContainer" containerID="a6e1edfb6dc9b543727ad08bf581b98d41e444de0c2c2ffc4d7a3ad0ae9f891c" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.265787 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.319704 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dh8p9"] Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.328206 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dh8p9"] Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.349676 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dpvlt"] Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.351436 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.354605 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.354650 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.354954 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.363708 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dpvlt"] Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.388432 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-scripts\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.388505 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-combined-ca-bundle\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.388567 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmvps\" (UniqueName: \"kubernetes.io/projected/95883dfb-ad1a-4d13-889e-4b9f73ded332-kube-api-access-mmvps\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 
08:41:23.388585 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-ring-data-devices\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.388608 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-dispersionconf\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.388650 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-swiftconf\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.388695 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95883dfb-ad1a-4d13-889e-4b9f73ded332-etc-swift\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490546 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-scripts\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490650 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-combined-ca-bundle\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490749 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmvps\" (UniqueName: \"kubernetes.io/projected/95883dfb-ad1a-4d13-889e-4b9f73ded332-kube-api-access-mmvps\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490774 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-ring-data-devices\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490806 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-dispersionconf\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490846 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-swiftconf\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.490904 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95883dfb-ad1a-4d13-889e-4b9f73ded332-etc-swift\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.491242 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-scripts\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.491330 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95883dfb-ad1a-4d13-889e-4b9f73ded332-etc-swift\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.491498 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-ring-data-devices\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.494677 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-combined-ca-bundle\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.494726 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-swiftconf\") pod 
\"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.499143 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-dispersionconf\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.515926 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmvps\" (UniqueName: \"kubernetes.io/projected/95883dfb-ad1a-4d13-889e-4b9f73ded332-kube-api-access-mmvps\") pod \"swift-ring-rebalance-dpvlt\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.592600 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:23 crc kubenswrapper[4992]: E1211 08:41:23.592767 4992 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 08:41:23 crc kubenswrapper[4992]: E1211 08:41:23.593026 4992 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 08:41:23 crc kubenswrapper[4992]: E1211 08:41:23.593094 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift podName:47851b57-2a65-4a8a-b2a2-f01a5a2d7833 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:41:24.593074896 +0000 UTC m=+1108.852548822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift") pod "swift-storage-0" (UID: "47851b57-2a65-4a8a-b2a2-f01a5a2d7833") : configmap "swift-ring-files" not found Dec 11 08:41:23 crc kubenswrapper[4992]: I1211 08:41:23.665526 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.103329 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11380fe-b009-4ce2-a338-796c2677de3b" path="/var/lib/kubelet/pods/d11380fe-b009-4ce2-a338-796c2677de3b/volumes" Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.126113 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dpvlt"] Dec 11 08:41:24 crc kubenswrapper[4992]: W1211 08:41:24.126607 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95883dfb_ad1a_4d13_889e_4b9f73ded332.slice/crio-46670349d8c626264da22476d63a94db98fae0ee99896cf462a04b7f43c9dd40 WatchSource:0}: Error finding container 46670349d8c626264da22476d63a94db98fae0ee99896cf462a04b7f43c9dd40: Status 404 returned error can't find the container with id 46670349d8c626264da22476d63a94db98fae0ee99896cf462a04b7f43c9dd40 Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.241826 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dpvlt" event={"ID":"95883dfb-ad1a-4d13-889e-4b9f73ded332","Type":"ContainerStarted","Data":"46670349d8c626264da22476d63a94db98fae0ee99896cf462a04b7f43c9dd40"} Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.245139 4992 generic.go:334] "Generic (PLEG): container finished" podID="4c0a2193-f200-40ba-9038-68c99922f75a" 
containerID="56fe2d93d804795df924f20b91f8a282889092bd038aa4c88ca421ebac3e98d3" exitCode=0 Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.245191 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d5c6v" event={"ID":"4c0a2193-f200-40ba-9038-68c99922f75a","Type":"ContainerDied","Data":"56fe2d93d804795df924f20b91f8a282889092bd038aa4c88ca421ebac3e98d3"} Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.289957 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.611296 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:24 crc kubenswrapper[4992]: E1211 08:41:24.611469 4992 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 08:41:24 crc kubenswrapper[4992]: E1211 08:41:24.611684 4992 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 08:41:24 crc kubenswrapper[4992]: E1211 08:41:24.611732 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift podName:47851b57-2a65-4a8a-b2a2-f01a5a2d7833 nodeName:}" failed. No retries permitted until 2025-12-11 08:41:26.611718414 +0000 UTC m=+1110.871192340 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift") pod "swift-storage-0" (UID: "47851b57-2a65-4a8a-b2a2-f01a5a2d7833") : configmap "swift-ring-files" not found Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.826000 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.869813 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 11 08:41:24 crc kubenswrapper[4992]: I1211 08:41:24.900240 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.056088 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.058318 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.063093 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.063444 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-rhrmh" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.063645 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.066011 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.100928 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.119801 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.119868 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.119922 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " 
pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.119954 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d4tk\" (UniqueName: \"kubernetes.io/projected/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-kube-api-access-2d4tk\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.119981 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-scripts\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.120021 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.120044 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-config\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221344 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221412 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2d4tk\" (UniqueName: \"kubernetes.io/projected/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-kube-api-access-2d4tk\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221454 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-scripts\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221498 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221529 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-config\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221668 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221706 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") 
" pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.221965 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.222450 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-config\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.223153 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-scripts\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.236382 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.246005 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.246300 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.246505 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d4tk\" (UniqueName: \"kubernetes.io/projected/5b008aff-f3e4-46b6-a5ff-52e0d80374d8-kube-api-access-2d4tk\") pod \"ovn-northd-0\" (UID: \"5b008aff-f3e4-46b6-a5ff-52e0d80374d8\") " pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.254708 4992 generic.go:334] "Generic (PLEG): container finished" podID="e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67" containerID="47ad442bdfdb5eee7aad192cc83ed9ab8a169f14fad66b9da17f00227e79c4f9" exitCode=0 Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.255624 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67","Type":"ContainerDied","Data":"47ad442bdfdb5eee7aad192cc83ed9ab8a169f14fad66b9da17f00227e79c4f9"} Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.275511 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d5c6v" event={"ID":"4c0a2193-f200-40ba-9038-68c99922f75a","Type":"ContainerStarted","Data":"6004d367fdde689658b0eb9cc2eb4dec41e5a972c9aeb6c12d93325e6ff7a000"} Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.275971 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.367831 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.385992 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 08:41:25 crc kubenswrapper[4992]: I1211 08:41:25.485146 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podStartSLOduration=4.485113062 podStartE2EDuration="4.485113062s" podCreationTimestamp="2025-12-11 08:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:25.405398958 +0000 UTC m=+1109.664872894" watchObservedRunningTime="2025-12-11 08:41:25.485113062 +0000 UTC m=+1109.744587418" Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.126984 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.285908 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67","Type":"ContainerStarted","Data":"9b35501c011b33bae8c16be846c220b2e4e7c29a04d689b2fd7b7d638ef4d94c"} Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.288670 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5b008aff-f3e4-46b6-a5ff-52e0d80374d8","Type":"ContainerStarted","Data":"315fce0b5206661298998737dc05a88ac715869515d3cf2edbc5d458dc024885"} Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.292793 4992 generic.go:334] "Generic (PLEG): container finished" podID="0c5eb79c-8f1c-4416-ab38-00b67e0b3f86" containerID="927b7635eefc169acf25852c676408f7a4c81c7917766d53524b3940d271f915" exitCode=0 Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.292893 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86","Type":"ContainerDied","Data":"927b7635eefc169acf25852c676408f7a4c81c7917766d53524b3940d271f915"} Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 
08:41:26.318551 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.76138171 podStartE2EDuration="1m41.31853291s" podCreationTimestamp="2025-12-11 08:39:45 +0000 UTC" firstStartedPulling="2025-12-11 08:39:47.233357124 +0000 UTC m=+1011.492831040" lastFinishedPulling="2025-12-11 08:41:04.790508304 +0000 UTC m=+1089.049982240" observedRunningTime="2025-12-11 08:41:26.314349068 +0000 UTC m=+1110.573823014" watchObservedRunningTime="2025-12-11 08:41:26.31853291 +0000 UTC m=+1110.578006826" Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.665373 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.665752 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 08:41:26 crc kubenswrapper[4992]: I1211 08:41:26.666186 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:26 crc kubenswrapper[4992]: E1211 08:41:26.666337 4992 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 08:41:26 crc kubenswrapper[4992]: E1211 08:41:26.666364 4992 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 08:41:26 crc kubenswrapper[4992]: E1211 08:41:26.666427 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift podName:47851b57-2a65-4a8a-b2a2-f01a5a2d7833 nodeName:}" failed. 
No retries permitted until 2025-12-11 08:41:30.666407988 +0000 UTC m=+1114.925881914 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift") pod "swift-storage-0" (UID: "47851b57-2a65-4a8a-b2a2-f01a5a2d7833") : configmap "swift-ring-files" not found Dec 11 08:41:27 crc kubenswrapper[4992]: I1211 08:41:27.305370 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c5eb79c-8f1c-4416-ab38-00b67e0b3f86","Type":"ContainerStarted","Data":"8ebc83fbf56f023f0780f96e2c7b1355e3f0d6f71f0e38f64a178e18b613e9bb"} Dec 11 08:41:27 crc kubenswrapper[4992]: I1211 08:41:27.330614 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371935.524187 podStartE2EDuration="1m41.330588578s" podCreationTimestamp="2025-12-11 08:39:46 +0000 UTC" firstStartedPulling="2025-12-11 08:39:56.923487744 +0000 UTC m=+1021.182961670" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:27.327939013 +0000 UTC m=+1111.587412959" watchObservedRunningTime="2025-12-11 08:41:27.330588578 +0000 UTC m=+1111.590062494" Dec 11 08:41:28 crc kubenswrapper[4992]: I1211 08:41:28.104248 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 11 08:41:28 crc kubenswrapper[4992]: I1211 08:41:28.104298 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 11 08:41:30 crc kubenswrapper[4992]: I1211 08:41:30.731590 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:30 
crc kubenswrapper[4992]: E1211 08:41:30.731828 4992 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 08:41:30 crc kubenswrapper[4992]: E1211 08:41:30.733656 4992 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 08:41:30 crc kubenswrapper[4992]: E1211 08:41:30.733743 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift podName:47851b57-2a65-4a8a-b2a2-f01a5a2d7833 nodeName:}" failed. No retries permitted until 2025-12-11 08:41:38.733721185 +0000 UTC m=+1122.993195111 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift") pod "swift-storage-0" (UID: "47851b57-2a65-4a8a-b2a2-f01a5a2d7833") : configmap "swift-ring-files" not found Dec 11 08:41:31 crc kubenswrapper[4992]: I1211 08:41:31.561424 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 08:41:32 crc kubenswrapper[4992]: I1211 08:41:32.016856 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:41:32 crc kubenswrapper[4992]: I1211 08:41:32.076317 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7m76v"] Dec 11 08:41:32 crc kubenswrapper[4992]: I1211 08:41:32.076555 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerName="dnsmasq-dns" containerID="cri-o://1bbea0374d2943ffdf678a1bf9dadda5d95ac984b94a5177482f543e7bb11490" gracePeriod=10 Dec 11 08:41:32 crc kubenswrapper[4992]: I1211 08:41:32.344535 4992 generic.go:334] "Generic (PLEG): container finished" 
podID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerID="1bbea0374d2943ffdf678a1bf9dadda5d95ac984b94a5177482f543e7bb11490" exitCode=0 Dec 11 08:41:32 crc kubenswrapper[4992]: I1211 08:41:32.344624 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" event={"ID":"141181c2-23c2-46f1-bea8-0af18021f8ed","Type":"ContainerDied","Data":"1bbea0374d2943ffdf678a1bf9dadda5d95ac984b94a5177482f543e7bb11490"} Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.015832 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.072055 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.107478 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.182411 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-sb\") pod \"141181c2-23c2-46f1-bea8-0af18021f8ed\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.182568 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-dns-svc\") pod \"141181c2-23c2-46f1-bea8-0af18021f8ed\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.182622 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-nb\") pod \"141181c2-23c2-46f1-bea8-0af18021f8ed\" (UID: 
\"141181c2-23c2-46f1-bea8-0af18021f8ed\") " Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.182685 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-config\") pod \"141181c2-23c2-46f1-bea8-0af18021f8ed\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.182727 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwm5w\" (UniqueName: \"kubernetes.io/projected/141181c2-23c2-46f1-bea8-0af18021f8ed-kube-api-access-pwm5w\") pod \"141181c2-23c2-46f1-bea8-0af18021f8ed\" (UID: \"141181c2-23c2-46f1-bea8-0af18021f8ed\") " Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.189735 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141181c2-23c2-46f1-bea8-0af18021f8ed-kube-api-access-pwm5w" (OuterVolumeSpecName: "kube-api-access-pwm5w") pod "141181c2-23c2-46f1-bea8-0af18021f8ed" (UID: "141181c2-23c2-46f1-bea8-0af18021f8ed"). InnerVolumeSpecName "kube-api-access-pwm5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.224445 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-config" (OuterVolumeSpecName: "config") pod "141181c2-23c2-46f1-bea8-0af18021f8ed" (UID: "141181c2-23c2-46f1-bea8-0af18021f8ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.236447 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "141181c2-23c2-46f1-bea8-0af18021f8ed" (UID: "141181c2-23c2-46f1-bea8-0af18021f8ed"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.252562 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "141181c2-23c2-46f1-bea8-0af18021f8ed" (UID: "141181c2-23c2-46f1-bea8-0af18021f8ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.255486 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "141181c2-23c2-46f1-bea8-0af18021f8ed" (UID: "141181c2-23c2-46f1-bea8-0af18021f8ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.284812 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.284845 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.284854 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwm5w\" (UniqueName: \"kubernetes.io/projected/141181c2-23c2-46f1-bea8-0af18021f8ed-kube-api-access-pwm5w\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.284866 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-ovsdbserver-sb\") on node \"crc\" 
DevicePath \"\"" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.284874 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/141181c2-23c2-46f1-bea8-0af18021f8ed-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.355061 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" event={"ID":"141181c2-23c2-46f1-bea8-0af18021f8ed","Type":"ContainerDied","Data":"c4b9214df682e32648c317bf5a8646808943179b850e7c7f701f672899994430"} Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.355092 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-7m76v" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.355124 4992 scope.go:117] "RemoveContainer" containerID="1bbea0374d2943ffdf678a1bf9dadda5d95ac984b94a5177482f543e7bb11490" Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.390535 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7m76v"] Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.399469 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-7m76v"] Dec 11 08:41:33 crc kubenswrapper[4992]: I1211 08:41:33.700832 4992 scope.go:117] "RemoveContainer" containerID="f2db509f500b046a4d0b0b46bc256b2d8d8107dd50bfec88f3e0a10e8bfafe16" Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.109061 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" path="/var/lib/kubelet/pods/141181c2-23c2-46f1-bea8-0af18021f8ed/volumes" Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.364368 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dpvlt" 
event={"ID":"95883dfb-ad1a-4d13-889e-4b9f73ded332","Type":"ContainerStarted","Data":"190765524b1ccbb84f52f0ffa1b930605699ac2e7e3bff018e74f01aedbfc9aa"} Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.373036 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5b008aff-f3e4-46b6-a5ff-52e0d80374d8","Type":"ContainerStarted","Data":"0e533621f21c2d4285a298bd8bdb00d605d3e03d6c58e4d22938708c79e701d9"} Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.373091 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5b008aff-f3e4-46b6-a5ff-52e0d80374d8","Type":"ContainerStarted","Data":"f00e55e992b354f1ee448e4e283bd9ea37434f24643d9f05e2dd023a9bb94843"} Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.374202 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.402277 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dpvlt" podStartSLOduration=1.829836931 podStartE2EDuration="11.402233776s" podCreationTimestamp="2025-12-11 08:41:23 +0000 UTC" firstStartedPulling="2025-12-11 08:41:24.129097964 +0000 UTC m=+1108.388571890" lastFinishedPulling="2025-12-11 08:41:33.701494809 +0000 UTC m=+1117.960968735" observedRunningTime="2025-12-11 08:41:34.397070989 +0000 UTC m=+1118.656544925" watchObservedRunningTime="2025-12-11 08:41:34.402233776 +0000 UTC m=+1118.661707702" Dec 11 08:41:34 crc kubenswrapper[4992]: I1211 08:41:34.433560 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.718803732 podStartE2EDuration="9.433540773s" podCreationTimestamp="2025-12-11 08:41:25 +0000 UTC" firstStartedPulling="2025-12-11 08:41:26.143540891 +0000 UTC m=+1110.403014817" lastFinishedPulling="2025-12-11 08:41:33.858277922 +0000 UTC m=+1118.117751858" 
observedRunningTime="2025-12-11 08:41:34.428188811 +0000 UTC m=+1118.687662737" watchObservedRunningTime="2025-12-11 08:41:34.433540773 +0000 UTC m=+1118.693014699" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.389319 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-jbtss"] Dec 11 08:41:35 crc kubenswrapper[4992]: E1211 08:41:35.390224 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerName="init" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.390243 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerName="init" Dec 11 08:41:35 crc kubenswrapper[4992]: E1211 08:41:35.390293 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerName="dnsmasq-dns" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.390302 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerName="dnsmasq-dns" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.390509 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="141181c2-23c2-46f1-bea8-0af18021f8ed" containerName="dnsmasq-dns" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.391277 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.400714 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jbtss"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.408948 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6drxf"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.410605 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.427341 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6drxf"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.512654 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lxx2h"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.513986 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.520586 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lxx2h"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.533158 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7979a378-800d-4749-bc4b-98f4c85b2624-operator-scripts\") pod \"cinder-db-create-6drxf\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") " pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.533385 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094dc08d-1b1c-4481-8ab4-127dc18a6b01-operator-scripts\") pod \"barbican-db-create-jbtss\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") " pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.533452 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kg4z\" (UniqueName: \"kubernetes.io/projected/7979a378-800d-4749-bc4b-98f4c85b2624-kube-api-access-4kg4z\") pod \"cinder-db-create-6drxf\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") " pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.533883 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqhw\" (UniqueName: \"kubernetes.io/projected/094dc08d-1b1c-4481-8ab4-127dc18a6b01-kube-api-access-sdqhw\") pod \"barbican-db-create-jbtss\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") " pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.608220 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fc5b-account-create-update-jjhh4"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.609433 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.616497 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.622083 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fc5b-account-create-update-jjhh4"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.636445 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdqhw\" (UniqueName: \"kubernetes.io/projected/094dc08d-1b1c-4481-8ab4-127dc18a6b01-kube-api-access-sdqhw\") pod \"barbican-db-create-jbtss\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") " pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.636594 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz5sz\" (UniqueName: \"kubernetes.io/projected/4aa3c025-0da1-4697-bba1-57cb62d804e5-kube-api-access-rz5sz\") pod \"neutron-db-create-lxx2h\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") " pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.636695 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7979a378-800d-4749-bc4b-98f4c85b2624-operator-scripts\") pod \"cinder-db-create-6drxf\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") " pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.636753 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094dc08d-1b1c-4481-8ab4-127dc18a6b01-operator-scripts\") pod \"barbican-db-create-jbtss\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") " pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.636780 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kg4z\" (UniqueName: \"kubernetes.io/projected/7979a378-800d-4749-bc4b-98f4c85b2624-kube-api-access-4kg4z\") pod \"cinder-db-create-6drxf\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") " pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.636858 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa3c025-0da1-4697-bba1-57cb62d804e5-operator-scripts\") pod \"neutron-db-create-lxx2h\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") " pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.638071 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7979a378-800d-4749-bc4b-98f4c85b2624-operator-scripts\") pod \"cinder-db-create-6drxf\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") " pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.638566 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/094dc08d-1b1c-4481-8ab4-127dc18a6b01-operator-scripts\") pod \"barbican-db-create-jbtss\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") " pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.660066 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdqhw\" (UniqueName: \"kubernetes.io/projected/094dc08d-1b1c-4481-8ab4-127dc18a6b01-kube-api-access-sdqhw\") pod \"barbican-db-create-jbtss\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") " pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.660479 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kg4z\" (UniqueName: \"kubernetes.io/projected/7979a378-800d-4749-bc4b-98f4c85b2624-kube-api-access-4kg4z\") pod \"cinder-db-create-6drxf\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") " pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.712305 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbtss" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.716018 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-35b4-account-create-update-hxsnq"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.717363 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.719147 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.732458 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-35b4-account-create-update-hxsnq"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.732467 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6drxf" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.741170 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz5sz\" (UniqueName: \"kubernetes.io/projected/4aa3c025-0da1-4697-bba1-57cb62d804e5-kube-api-access-rz5sz\") pod \"neutron-db-create-lxx2h\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") " pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.741262 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-operator-scripts\") pod \"barbican-fc5b-account-create-update-jjhh4\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") " pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.741294 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhz8c\" (UniqueName: \"kubernetes.io/projected/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-kube-api-access-lhz8c\") pod \"barbican-fc5b-account-create-update-jjhh4\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") " pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.741354 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa3c025-0da1-4697-bba1-57cb62d804e5-operator-scripts\") pod \"neutron-db-create-lxx2h\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") " pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.742124 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa3c025-0da1-4697-bba1-57cb62d804e5-operator-scripts\") pod 
\"neutron-db-create-lxx2h\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") " pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.771402 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz5sz\" (UniqueName: \"kubernetes.io/projected/4aa3c025-0da1-4697-bba1-57cb62d804e5-kube-api-access-rz5sz\") pod \"neutron-db-create-lxx2h\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") " pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.810231 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b044-account-create-update-d7xvv"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.811338 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.813885 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.829363 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.843809 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b044-account-create-update-d7xvv"] Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.845447 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59aaff84-4003-4bf1-ba6b-c1dbadc40702-operator-scripts\") pod \"cinder-35b4-account-create-update-hxsnq\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.845606 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4m2\" (UniqueName: \"kubernetes.io/projected/59aaff84-4003-4bf1-ba6b-c1dbadc40702-kube-api-access-7c4m2\") pod \"cinder-35b4-account-create-update-hxsnq\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.845677 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-operator-scripts\") pod \"barbican-fc5b-account-create-update-jjhh4\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") " pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.845729 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhz8c\" (UniqueName: \"kubernetes.io/projected/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-kube-api-access-lhz8c\") pod \"barbican-fc5b-account-create-update-jjhh4\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") " pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc 
kubenswrapper[4992]: I1211 08:41:35.852214 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-operator-scripts\") pod \"barbican-fc5b-account-create-update-jjhh4\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") " pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.869349 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhz8c\" (UniqueName: \"kubernetes.io/projected/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-kube-api-access-lhz8c\") pod \"barbican-fc5b-account-create-update-jjhh4\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") " pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.925604 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fc5b-account-create-update-jjhh4" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.948455 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4m2\" (UniqueName: \"kubernetes.io/projected/59aaff84-4003-4bf1-ba6b-c1dbadc40702-kube-api-access-7c4m2\") pod \"cinder-35b4-account-create-update-hxsnq\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.948507 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdr2\" (UniqueName: \"kubernetes.io/projected/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-kube-api-access-sfdr2\") pod \"neutron-b044-account-create-update-d7xvv\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.948555 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-operator-scripts\") pod \"neutron-b044-account-create-update-d7xvv\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.948697 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59aaff84-4003-4bf1-ba6b-c1dbadc40702-operator-scripts\") pod \"cinder-35b4-account-create-update-hxsnq\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.949410 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59aaff84-4003-4bf1-ba6b-c1dbadc40702-operator-scripts\") pod \"cinder-35b4-account-create-update-hxsnq\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:35 crc kubenswrapper[4992]: I1211 08:41:35.966029 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4m2\" (UniqueName: \"kubernetes.io/projected/59aaff84-4003-4bf1-ba6b-c1dbadc40702-kube-api-access-7c4m2\") pod \"cinder-35b4-account-create-update-hxsnq\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.050017 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdr2\" (UniqueName: \"kubernetes.io/projected/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-kube-api-access-sfdr2\") pod \"neutron-b044-account-create-update-d7xvv\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " 
pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.050346 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-operator-scripts\") pod \"neutron-b044-account-create-update-d7xvv\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.051182 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-operator-scripts\") pod \"neutron-b044-account-create-update-d7xvv\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.074023 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfdr2\" (UniqueName: \"kubernetes.io/projected/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-kube-api-access-sfdr2\") pod \"neutron-b044-account-create-update-d7xvv\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.166127 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.208564 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.296233 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.299859 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6drxf"] Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.306256 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-jbtss"] Dec 11 08:41:36 crc kubenswrapper[4992]: W1211 08:41:36.314336 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod094dc08d_1b1c_4481_8ab4_127dc18a6b01.slice/crio-8e20f29472d178fa95c163ed1ca8aeb5f560a27c9551e772331446d1e56ad2cc WatchSource:0}: Error finding container 8e20f29472d178fa95c163ed1ca8aeb5f560a27c9551e772331446d1e56ad2cc: Status 404 returned error can't find the container with id 8e20f29472d178fa95c163ed1ca8aeb5f560a27c9551e772331446d1e56ad2cc Dec 11 08:41:36 crc kubenswrapper[4992]: W1211 08:41:36.336061 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7979a378_800d_4749_bc4b_98f4c85b2624.slice/crio-205702e9ab6b42614f395731faa59bd84cd786fc531d04bfa8c6d6d85f691877 WatchSource:0}: Error finding container 205702e9ab6b42614f395731faa59bd84cd786fc531d04bfa8c6d6d85f691877: Status 404 returned error can't find the container with id 205702e9ab6b42614f395731faa59bd84cd786fc531d04bfa8c6d6d85f691877 Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.347555 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.418463 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbtss" event={"ID":"094dc08d-1b1c-4481-8ab4-127dc18a6b01","Type":"ContainerStarted","Data":"8e20f29472d178fa95c163ed1ca8aeb5f560a27c9551e772331446d1e56ad2cc"} Dec 11 08:41:36 crc 
kubenswrapper[4992]: I1211 08:41:36.421931 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6drxf" event={"ID":"7979a378-800d-4749-bc4b-98f4c85b2624","Type":"ContainerStarted","Data":"205702e9ab6b42614f395731faa59bd84cd786fc531d04bfa8c6d6d85f691877"} Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.477704 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lxx2h"] Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.525121 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fc5b-account-create-update-jjhh4"] Dec 11 08:41:36 crc kubenswrapper[4992]: W1211 08:41:36.529596 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4aa3c025_0da1_4697_bba1_57cb62d804e5.slice/crio-8fa8bed833b8c93bccc842a99ccdb5897c848a2bb4344f5a1f2a82c40e95198d WatchSource:0}: Error finding container 8fa8bed833b8c93bccc842a99ccdb5897c848a2bb4344f5a1f2a82c40e95198d: Status 404 returned error can't find the container with id 8fa8bed833b8c93bccc842a99ccdb5897c848a2bb4344f5a1f2a82c40e95198d Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.705301 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-35b4-account-create-update-hxsnq"] Dec 11 08:41:36 crc kubenswrapper[4992]: I1211 08:41:36.845178 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b044-account-create-update-d7xvv"] Dec 11 08:41:36 crc kubenswrapper[4992]: W1211 08:41:36.849575 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2a216d_05f7_4862_9e26_3e7ca9c6b56c.slice/crio-ca61ad48a2a3599818ecee9b4dea01300f6ffa019a0c48e6d2d1add30c149ebd WatchSource:0}: Error finding container ca61ad48a2a3599818ecee9b4dea01300f6ffa019a0c48e6d2d1add30c149ebd: Status 404 returned error can't find the container with id 
ca61ad48a2a3599818ecee9b4dea01300f6ffa019a0c48e6d2d1add30c149ebd Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.428993 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fc5b-account-create-update-jjhh4" event={"ID":"c3d56e1b-849a-4c3e-b588-fb052a8bfb46","Type":"ContainerStarted","Data":"a2ce3a6e62852f4f299525b0db96e68819b9daee845eea0bf8ac109e8e031105"} Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.431155 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lxx2h" event={"ID":"4aa3c025-0da1-4697-bba1-57cb62d804e5","Type":"ContainerStarted","Data":"8fa8bed833b8c93bccc842a99ccdb5897c848a2bb4344f5a1f2a82c40e95198d"} Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.432537 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6drxf" event={"ID":"7979a378-800d-4749-bc4b-98f4c85b2624","Type":"ContainerStarted","Data":"51110a2005ba0d8f37798fbe9e79f67a055f99db12ea65dc5ce346dd00afc4dc"} Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.435161 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbtss" event={"ID":"094dc08d-1b1c-4481-8ab4-127dc18a6b01","Type":"ContainerStarted","Data":"4aafc1904c8dfbc12f0e49b3def15fd0e5d20b6addc1c1f512c58e7af1fbeb14"} Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.438097 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-35b4-account-create-update-hxsnq" event={"ID":"59aaff84-4003-4bf1-ba6b-c1dbadc40702","Type":"ContainerStarted","Data":"e2a24b20f9dd74e95d573a2ebf2f3478bf2cf743f64e0f9ebd8124f3426ffe95"} Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.439668 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b044-account-create-update-d7xvv" event={"ID":"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c","Type":"ContainerStarted","Data":"ca61ad48a2a3599818ecee9b4dea01300f6ffa019a0c48e6d2d1add30c149ebd"} Dec 11 08:41:37 crc 
kubenswrapper[4992]: I1211 08:41:37.449931 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-6drxf" podStartSLOduration=2.449907309 podStartE2EDuration="2.449907309s" podCreationTimestamp="2025-12-11 08:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:37.445997644 +0000 UTC m=+1121.705471570" watchObservedRunningTime="2025-12-11 08:41:37.449907309 +0000 UTC m=+1121.709381235"
Dec 11 08:41:37 crc kubenswrapper[4992]: I1211 08:41:37.484308 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-jbtss" podStartSLOduration=2.484289992 podStartE2EDuration="2.484289992s" podCreationTimestamp="2025-12-11 08:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:37.470538545 +0000 UTC m=+1121.730012551" watchObservedRunningTime="2025-12-11 08:41:37.484289992 +0000 UTC m=+1121.743763918"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.020070 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-v24w2"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.021793 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.036578 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v24w2"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.095726 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a2c64c-7a55-413d-ab90-c79bf73b9951-operator-scripts\") pod \"keystone-db-create-v24w2\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.095973 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tzcr\" (UniqueName: \"kubernetes.io/projected/33a2c64c-7a55-413d-ab90-c79bf73b9951-kube-api-access-8tzcr\") pod \"keystone-db-create-v24w2\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.108535 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c6e-account-create-update-77t4h"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.109424 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c6e-account-create-update-77t4h"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.109503 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.111628 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.197647 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff393885-17ba-4d2f-a3ea-b74533842367-operator-scripts\") pod \"keystone-6c6e-account-create-update-77t4h\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.197939 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qv7d\" (UniqueName: \"kubernetes.io/projected/ff393885-17ba-4d2f-a3ea-b74533842367-kube-api-access-4qv7d\") pod \"keystone-6c6e-account-create-update-77t4h\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.198402 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a2c64c-7a55-413d-ab90-c79bf73b9951-operator-scripts\") pod \"keystone-db-create-v24w2\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.198539 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tzcr\" (UniqueName: \"kubernetes.io/projected/33a2c64c-7a55-413d-ab90-c79bf73b9951-kube-api-access-8tzcr\") pod \"keystone-db-create-v24w2\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.199918 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a2c64c-7a55-413d-ab90-c79bf73b9951-operator-scripts\") pod \"keystone-db-create-v24w2\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.220706 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tzcr\" (UniqueName: \"kubernetes.io/projected/33a2c64c-7a55-413d-ab90-c79bf73b9951-kube-api-access-8tzcr\") pod \"keystone-db-create-v24w2\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.300414 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff393885-17ba-4d2f-a3ea-b74533842367-operator-scripts\") pod \"keystone-6c6e-account-create-update-77t4h\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.300531 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qv7d\" (UniqueName: \"kubernetes.io/projected/ff393885-17ba-4d2f-a3ea-b74533842367-kube-api-access-4qv7d\") pod \"keystone-6c6e-account-create-update-77t4h\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.301601 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff393885-17ba-4d2f-a3ea-b74533842367-operator-scripts\") pod \"keystone-6c6e-account-create-update-77t4h\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.319873 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qv7d\" (UniqueName: \"kubernetes.io/projected/ff393885-17ba-4d2f-a3ea-b74533842367-kube-api-access-4qv7d\") pod \"keystone-6c6e-account-create-update-77t4h\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.334693 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4mn8r"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.336032 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.342602 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4mn8r"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.346700 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.402660 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2dv2\" (UniqueName: \"kubernetes.io/projected/2c61d47b-62bb-48b1-83ae-1aa375e422cd-kube-api-access-m2dv2\") pod \"placement-db-create-4mn8r\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.402950 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c61d47b-62bb-48b1-83ae-1aa375e422cd-operator-scripts\") pod \"placement-db-create-4mn8r\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.435324 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.455578 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lxx2h" event={"ID":"4aa3c025-0da1-4697-bba1-57cb62d804e5","Type":"ContainerStarted","Data":"ab7ec2ba9fc648b014a801ed6425f22145cff02acbca3a012c445200dcb98044"}
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.465003 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4466-account-create-update-ml5rl"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.466171 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.469765 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.475131 4992 generic.go:334] "Generic (PLEG): container finished" podID="7979a378-800d-4749-bc4b-98f4c85b2624" containerID="51110a2005ba0d8f37798fbe9e79f67a055f99db12ea65dc5ce346dd00afc4dc" exitCode=0
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.475355 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6drxf" event={"ID":"7979a378-800d-4749-bc4b-98f4c85b2624","Type":"ContainerDied","Data":"51110a2005ba0d8f37798fbe9e79f67a055f99db12ea65dc5ce346dd00afc4dc"}
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.486230 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4466-account-create-update-ml5rl"]
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.493510 4992 generic.go:334] "Generic (PLEG): container finished" podID="094dc08d-1b1c-4481-8ab4-127dc18a6b01" containerID="4aafc1904c8dfbc12f0e49b3def15fd0e5d20b6addc1c1f512c58e7af1fbeb14" exitCode=0
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.493656 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbtss" event={"ID":"094dc08d-1b1c-4481-8ab4-127dc18a6b01","Type":"ContainerDied","Data":"4aafc1904c8dfbc12f0e49b3def15fd0e5d20b6addc1c1f512c58e7af1fbeb14"}
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.494424 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-lxx2h" podStartSLOduration=3.494408641 podStartE2EDuration="3.494408641s" podCreationTimestamp="2025-12-11 08:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:38.473066578 +0000 UTC m=+1122.732540504" watchObservedRunningTime="2025-12-11 08:41:38.494408641 +0000 UTC m=+1122.753882557"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.506277 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcsvf\" (UniqueName: \"kubernetes.io/projected/32320aa4-106e-4dee-89f2-e5034dde3022-kube-api-access-vcsvf\") pod \"placement-4466-account-create-update-ml5rl\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.508393 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32320aa4-106e-4dee-89f2-e5034dde3022-operator-scripts\") pod \"placement-4466-account-create-update-ml5rl\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.508663 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2dv2\" (UniqueName: \"kubernetes.io/projected/2c61d47b-62bb-48b1-83ae-1aa375e422cd-kube-api-access-m2dv2\") pod \"placement-db-create-4mn8r\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.508711 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c61d47b-62bb-48b1-83ae-1aa375e422cd-operator-scripts\") pod \"placement-db-create-4mn8r\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.509541 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c61d47b-62bb-48b1-83ae-1aa375e422cd-operator-scripts\") pod \"placement-db-create-4mn8r\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.524855 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-35b4-account-create-update-hxsnq" event={"ID":"59aaff84-4003-4bf1-ba6b-c1dbadc40702","Type":"ContainerStarted","Data":"d0acfb3a8972eada88bed129ea56fbb0e97b97f25fa695fdab18afa6d6edc59f"}
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.536006 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b044-account-create-update-d7xvv" event={"ID":"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c","Type":"ContainerStarted","Data":"8bd179f404363500f65e78de070e89d64d789eba0c48237d92df54ae2f59999e"}
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.539006 4992 generic.go:334] "Generic (PLEG): container finished" podID="c3d56e1b-849a-4c3e-b588-fb052a8bfb46" containerID="e79f58df599ab8db845363e3e3c6dd53fe4bd52b35c34fd7bff318a703d2a05a" exitCode=0
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.539052 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fc5b-account-create-update-jjhh4" event={"ID":"c3d56e1b-849a-4c3e-b588-fb052a8bfb46","Type":"ContainerDied","Data":"e79f58df599ab8db845363e3e3c6dd53fe4bd52b35c34fd7bff318a703d2a05a"}
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.540512 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2dv2\" (UniqueName: \"kubernetes.io/projected/2c61d47b-62bb-48b1-83ae-1aa375e422cd-kube-api-access-m2dv2\") pod \"placement-db-create-4mn8r\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.575373 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-35b4-account-create-update-hxsnq" podStartSLOduration=3.5753479649999997 podStartE2EDuration="3.575347965s" podCreationTimestamp="2025-12-11 08:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:38.558357019 +0000 UTC m=+1122.817830935" watchObservedRunningTime="2025-12-11 08:41:38.575347965 +0000 UTC m=+1122.834821891"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.612892 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcsvf\" (UniqueName: \"kubernetes.io/projected/32320aa4-106e-4dee-89f2-e5034dde3022-kube-api-access-vcsvf\") pod \"placement-4466-account-create-update-ml5rl\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.612950 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32320aa4-106e-4dee-89f2-e5034dde3022-operator-scripts\") pod \"placement-4466-account-create-update-ml5rl\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.617281 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b044-account-create-update-d7xvv" podStartSLOduration=3.617248712 podStartE2EDuration="3.617248712s" podCreationTimestamp="2025-12-11 08:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:38.611718657 +0000 UTC m=+1122.871192583" watchObservedRunningTime="2025-12-11 08:41:38.617248712 +0000 UTC m=+1122.876722638"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.619237 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32320aa4-106e-4dee-89f2-e5034dde3022-operator-scripts\") pod \"placement-4466-account-create-update-ml5rl\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.638543 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcsvf\" (UniqueName: \"kubernetes.io/projected/32320aa4-106e-4dee-89f2-e5034dde3022-kube-api-access-vcsvf\") pod \"placement-4466-account-create-update-ml5rl\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.801489 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c6e-account-create-update-77t4h"]
Dec 11 08:41:38 crc kubenswrapper[4992]: W1211 08:41:38.810489 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff393885_17ba_4d2f_a3ea_b74533842367.slice/crio-bcde57c4edf990ab00e850c0f4afd9c8cd5d6d2312fab03af65dfff157bc7a47 WatchSource:0}: Error finding container bcde57c4edf990ab00e850c0f4afd9c8cd5d6d2312fab03af65dfff157bc7a47: Status 404 returned error can't find the container with id bcde57c4edf990ab00e850c0f4afd9c8cd5d6d2312fab03af65dfff157bc7a47
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.816235 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0"
Dec 11 08:41:38 crc kubenswrapper[4992]: E1211 08:41:38.816456 4992 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 11 08:41:38 crc kubenswrapper[4992]: E1211 08:41:38.816487 4992 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 11 08:41:38 crc kubenswrapper[4992]: E1211 08:41:38.816534 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift podName:47851b57-2a65-4a8a-b2a2-f01a5a2d7833 nodeName:}" failed. No retries permitted until 2025-12-11 08:41:54.816518556 +0000 UTC m=+1139.075992482 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift") pod "swift-storage-0" (UID: "47851b57-2a65-4a8a-b2a2-f01a5a2d7833") : configmap "swift-ring-files" not found
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.825602 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4mn8r"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.835461 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:38 crc kubenswrapper[4992]: I1211 08:41:38.902954 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v24w2"]
Dec 11 08:41:38 crc kubenswrapper[4992]: W1211 08:41:38.919768 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33a2c64c_7a55_413d_ab90_c79bf73b9951.slice/crio-e8230f119a1124185a2852b377428a0d6777560fdae92644e02b3f30d4dfbfa5 WatchSource:0}: Error finding container e8230f119a1124185a2852b377428a0d6777560fdae92644e02b3f30d4dfbfa5: Status 404 returned error can't find the container with id e8230f119a1124185a2852b377428a0d6777560fdae92644e02b3f30d4dfbfa5
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.297956 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4mn8r"]
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.362753 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4466-account-create-update-ml5rl"]
Dec 11 08:41:39 crc kubenswrapper[4992]: W1211 08:41:39.370728 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32320aa4_106e_4dee_89f2_e5034dde3022.slice/crio-5e720c00b7bb8d8c254f11839bcd1f6adfec76fb132eead2ace5aec67e16169f WatchSource:0}: Error finding container 5e720c00b7bb8d8c254f11839bcd1f6adfec76fb132eead2ace5aec67e16169f: Status 404 returned error can't find the container with id 5e720c00b7bb8d8c254f11839bcd1f6adfec76fb132eead2ace5aec67e16169f
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.548204 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v24w2" event={"ID":"33a2c64c-7a55-413d-ab90-c79bf73b9951","Type":"ContainerStarted","Data":"e8230f119a1124185a2852b377428a0d6777560fdae92644e02b3f30d4dfbfa5"}
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.549380 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4466-account-create-update-ml5rl" event={"ID":"32320aa4-106e-4dee-89f2-e5034dde3022","Type":"ContainerStarted","Data":"5e720c00b7bb8d8c254f11839bcd1f6adfec76fb132eead2ace5aec67e16169f"}
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.550912 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4mn8r" event={"ID":"2c61d47b-62bb-48b1-83ae-1aa375e422cd","Type":"ContainerStarted","Data":"6ddd3a3c89416e0e19a27eb648f7b5586707fca14bf9ca07a5327a2dbe820382"}
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.552108 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c6e-account-create-update-77t4h" event={"ID":"ff393885-17ba-4d2f-a3ea-b74533842367","Type":"ContainerStarted","Data":"bcde57c4edf990ab00e850c0f4afd9c8cd5d6d2312fab03af65dfff157bc7a47"}
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.844221 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbtss"
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.898769 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6drxf"
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.930303 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fc5b-account-create-update-jjhh4"
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.938351 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7979a378-800d-4749-bc4b-98f4c85b2624-operator-scripts\") pod \"7979a378-800d-4749-bc4b-98f4c85b2624\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") "
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.938410 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094dc08d-1b1c-4481-8ab4-127dc18a6b01-operator-scripts\") pod \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") "
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.938463 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdqhw\" (UniqueName: \"kubernetes.io/projected/094dc08d-1b1c-4481-8ab4-127dc18a6b01-kube-api-access-sdqhw\") pod \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\" (UID: \"094dc08d-1b1c-4481-8ab4-127dc18a6b01\") "
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.938483 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kg4z\" (UniqueName: \"kubernetes.io/projected/7979a378-800d-4749-bc4b-98f4c85b2624-kube-api-access-4kg4z\") pod \"7979a378-800d-4749-bc4b-98f4c85b2624\" (UID: \"7979a378-800d-4749-bc4b-98f4c85b2624\") "
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.939862 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/094dc08d-1b1c-4481-8ab4-127dc18a6b01-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "094dc08d-1b1c-4481-8ab4-127dc18a6b01" (UID: "094dc08d-1b1c-4481-8ab4-127dc18a6b01"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.940302 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7979a378-800d-4749-bc4b-98f4c85b2624-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7979a378-800d-4749-bc4b-98f4c85b2624" (UID: "7979a378-800d-4749-bc4b-98f4c85b2624"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.946926 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7979a378-800d-4749-bc4b-98f4c85b2624-kube-api-access-4kg4z" (OuterVolumeSpecName: "kube-api-access-4kg4z") pod "7979a378-800d-4749-bc4b-98f4c85b2624" (UID: "7979a378-800d-4749-bc4b-98f4c85b2624"). InnerVolumeSpecName "kube-api-access-4kg4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:39 crc kubenswrapper[4992]: I1211 08:41:39.954419 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/094dc08d-1b1c-4481-8ab4-127dc18a6b01-kube-api-access-sdqhw" (OuterVolumeSpecName: "kube-api-access-sdqhw") pod "094dc08d-1b1c-4481-8ab4-127dc18a6b01" (UID: "094dc08d-1b1c-4481-8ab4-127dc18a6b01"). InnerVolumeSpecName "kube-api-access-sdqhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.039923 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-operator-scripts\") pod \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") "
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.040163 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhz8c\" (UniqueName: \"kubernetes.io/projected/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-kube-api-access-lhz8c\") pod \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\" (UID: \"c3d56e1b-849a-4c3e-b588-fb052a8bfb46\") "
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.040592 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7979a378-800d-4749-bc4b-98f4c85b2624-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.040607 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/094dc08d-1b1c-4481-8ab4-127dc18a6b01-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.040616 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdqhw\" (UniqueName: \"kubernetes.io/projected/094dc08d-1b1c-4481-8ab4-127dc18a6b01-kube-api-access-sdqhw\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.040627 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kg4z\" (UniqueName: \"kubernetes.io/projected/7979a378-800d-4749-bc4b-98f4c85b2624-kube-api-access-4kg4z\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.041189 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3d56e1b-849a-4c3e-b588-fb052a8bfb46" (UID: "c3d56e1b-849a-4c3e-b588-fb052a8bfb46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.043345 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-kube-api-access-lhz8c" (OuterVolumeSpecName: "kube-api-access-lhz8c") pod "c3d56e1b-849a-4c3e-b588-fb052a8bfb46" (UID: "c3d56e1b-849a-4c3e-b588-fb052a8bfb46"). InnerVolumeSpecName "kube-api-access-lhz8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.142393 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.142425 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhz8c\" (UniqueName: \"kubernetes.io/projected/c3d56e1b-849a-4c3e-b588-fb052a8bfb46-kube-api-access-lhz8c\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.570560 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-jbtss" event={"ID":"094dc08d-1b1c-4481-8ab4-127dc18a6b01","Type":"ContainerDied","Data":"8e20f29472d178fa95c163ed1ca8aeb5f560a27c9551e772331446d1e56ad2cc"}
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.570623 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e20f29472d178fa95c163ed1ca8aeb5f560a27c9551e772331446d1e56ad2cc"
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.570706 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-jbtss"
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.572927 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fc5b-account-create-update-jjhh4"
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.572919 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fc5b-account-create-update-jjhh4" event={"ID":"c3d56e1b-849a-4c3e-b588-fb052a8bfb46","Type":"ContainerDied","Data":"a2ce3a6e62852f4f299525b0db96e68819b9daee845eea0bf8ac109e8e031105"}
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.573125 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2ce3a6e62852f4f299525b0db96e68819b9daee845eea0bf8ac109e8e031105"
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.575591 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6drxf" event={"ID":"7979a378-800d-4749-bc4b-98f4c85b2624","Type":"ContainerDied","Data":"205702e9ab6b42614f395731faa59bd84cd786fc531d04bfa8c6d6d85f691877"}
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.575619 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="205702e9ab6b42614f395731faa59bd84cd786fc531d04bfa8c6d6d85f691877"
Dec 11 08:41:40 crc kubenswrapper[4992]: I1211 08:41:40.575709 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6drxf"
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.584288 4992 generic.go:334] "Generic (PLEG): container finished" podID="ff393885-17ba-4d2f-a3ea-b74533842367" containerID="62433edbe0157e262b502417ef23a173380e9ce0a633628fd4658c68107c6d38" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.584360 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c6e-account-create-update-77t4h" event={"ID":"ff393885-17ba-4d2f-a3ea-b74533842367","Type":"ContainerDied","Data":"62433edbe0157e262b502417ef23a173380e9ce0a633628fd4658c68107c6d38"}
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.585883 4992 generic.go:334] "Generic (PLEG): container finished" podID="4aa3c025-0da1-4697-bba1-57cb62d804e5" containerID="ab7ec2ba9fc648b014a801ed6425f22145cff02acbca3a012c445200dcb98044" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.585933 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lxx2h" event={"ID":"4aa3c025-0da1-4697-bba1-57cb62d804e5","Type":"ContainerDied","Data":"ab7ec2ba9fc648b014a801ed6425f22145cff02acbca3a012c445200dcb98044"}
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.587302 4992 generic.go:334] "Generic (PLEG): container finished" podID="33a2c64c-7a55-413d-ab90-c79bf73b9951" containerID="7c50a99b24522578ce9c4f576971a63cd3838bec331c7d3bc1a5aa84b6ffc5ad" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.587366 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v24w2" event={"ID":"33a2c64c-7a55-413d-ab90-c79bf73b9951","Type":"ContainerDied","Data":"7c50a99b24522578ce9c4f576971a63cd3838bec331c7d3bc1a5aa84b6ffc5ad"}
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.589001 4992 generic.go:334] "Generic (PLEG): container finished" podID="32320aa4-106e-4dee-89f2-e5034dde3022" containerID="8d4af0619a669f75471429717a489568fd9be95d1f2e35e4641b967b2b4f8218" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.589244 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4466-account-create-update-ml5rl" event={"ID":"32320aa4-106e-4dee-89f2-e5034dde3022","Type":"ContainerDied","Data":"8d4af0619a669f75471429717a489568fd9be95d1f2e35e4641b967b2b4f8218"}
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.591026 4992 generic.go:334] "Generic (PLEG): container finished" podID="59aaff84-4003-4bf1-ba6b-c1dbadc40702" containerID="d0acfb3a8972eada88bed129ea56fbb0e97b97f25fa695fdab18afa6d6edc59f" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.591102 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-35b4-account-create-update-hxsnq" event={"ID":"59aaff84-4003-4bf1-ba6b-c1dbadc40702","Type":"ContainerDied","Data":"d0acfb3a8972eada88bed129ea56fbb0e97b97f25fa695fdab18afa6d6edc59f"}
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.592779 4992 generic.go:334] "Generic (PLEG): container finished" podID="ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" containerID="8bd179f404363500f65e78de070e89d64d789eba0c48237d92df54ae2f59999e" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.592803 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b044-account-create-update-d7xvv" event={"ID":"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c","Type":"ContainerDied","Data":"8bd179f404363500f65e78de070e89d64d789eba0c48237d92df54ae2f59999e"}
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.594338 4992 generic.go:334] "Generic (PLEG): container finished" podID="2c61d47b-62bb-48b1-83ae-1aa375e422cd" containerID="55418836b53fcadc757c12fd247d86a1754aa63e8646a2c326dc649fee032aea" exitCode=0
Dec 11 08:41:41 crc kubenswrapper[4992]: I1211 08:41:41.594369 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4mn8r" event={"ID":"2c61d47b-62bb-48b1-83ae-1aa375e422cd","Type":"ContainerDied","Data":"55418836b53fcadc757c12fd247d86a1754aa63e8646a2c326dc649fee032aea"}
Dec 11 08:41:42 crc kubenswrapper[4992]: I1211 08:41:42.917398 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lxx2h"
Dec 11 08:41:42 crc kubenswrapper[4992]: I1211 08:41:42.989949 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa3c025-0da1-4697-bba1-57cb62d804e5-operator-scripts\") pod \"4aa3c025-0da1-4697-bba1-57cb62d804e5\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") "
Dec 11 08:41:42 crc kubenswrapper[4992]: I1211 08:41:42.990140 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz5sz\" (UniqueName: \"kubernetes.io/projected/4aa3c025-0da1-4697-bba1-57cb62d804e5-kube-api-access-rz5sz\") pod \"4aa3c025-0da1-4697-bba1-57cb62d804e5\" (UID: \"4aa3c025-0da1-4697-bba1-57cb62d804e5\") "
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.000766 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa3c025-0da1-4697-bba1-57cb62d804e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4aa3c025-0da1-4697-bba1-57cb62d804e5" (UID: "4aa3c025-0da1-4697-bba1-57cb62d804e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.000835 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa3c025-0da1-4697-bba1-57cb62d804e5-kube-api-access-rz5sz" (OuterVolumeSpecName: "kube-api-access-rz5sz") pod "4aa3c025-0da1-4697-bba1-57cb62d804e5" (UID: "4aa3c025-0da1-4697-bba1-57cb62d804e5"). InnerVolumeSpecName "kube-api-access-rz5sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.092554 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa3c025-0da1-4697-bba1-57cb62d804e5-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.092595 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz5sz\" (UniqueName: \"kubernetes.io/projected/4aa3c025-0da1-4697-bba1-57cb62d804e5-kube-api-access-rz5sz\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.269485 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v24w2"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.288956 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-35b4-account-create-update-hxsnq"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.292989 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b044-account-create-update-d7xvv"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.305957 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4466-account-create-update-ml5rl"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.315682 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c6e-account-create-update-77t4h"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.337773 4992 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-db-create-4mn8r" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396370 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4m2\" (UniqueName: \"kubernetes.io/projected/59aaff84-4003-4bf1-ba6b-c1dbadc40702-kube-api-access-7c4m2\") pod \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396757 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a2c64c-7a55-413d-ab90-c79bf73b9951-operator-scripts\") pod \"33a2c64c-7a55-413d-ab90-c79bf73b9951\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396787 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c61d47b-62bb-48b1-83ae-1aa375e422cd-operator-scripts\") pod \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396803 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdr2\" (UniqueName: \"kubernetes.io/projected/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-kube-api-access-sfdr2\") pod \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396828 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcsvf\" (UniqueName: \"kubernetes.io/projected/32320aa4-106e-4dee-89f2-e5034dde3022-kube-api-access-vcsvf\") pod \"32320aa4-106e-4dee-89f2-e5034dde3022\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396848 4992 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59aaff84-4003-4bf1-ba6b-c1dbadc40702-operator-scripts\") pod \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\" (UID: \"59aaff84-4003-4bf1-ba6b-c1dbadc40702\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396871 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tzcr\" (UniqueName: \"kubernetes.io/projected/33a2c64c-7a55-413d-ab90-c79bf73b9951-kube-api-access-8tzcr\") pod \"33a2c64c-7a55-413d-ab90-c79bf73b9951\" (UID: \"33a2c64c-7a55-413d-ab90-c79bf73b9951\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396921 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-operator-scripts\") pod \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\" (UID: \"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396948 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32320aa4-106e-4dee-89f2-e5034dde3022-operator-scripts\") pod \"32320aa4-106e-4dee-89f2-e5034dde3022\" (UID: \"32320aa4-106e-4dee-89f2-e5034dde3022\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.396980 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qv7d\" (UniqueName: \"kubernetes.io/projected/ff393885-17ba-4d2f-a3ea-b74533842367-kube-api-access-4qv7d\") pod \"ff393885-17ba-4d2f-a3ea-b74533842367\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.397003 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2dv2\" (UniqueName: \"kubernetes.io/projected/2c61d47b-62bb-48b1-83ae-1aa375e422cd-kube-api-access-m2dv2\") pod 
\"2c61d47b-62bb-48b1-83ae-1aa375e422cd\" (UID: \"2c61d47b-62bb-48b1-83ae-1aa375e422cd\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.397088 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff393885-17ba-4d2f-a3ea-b74533842367-operator-scripts\") pod \"ff393885-17ba-4d2f-a3ea-b74533842367\" (UID: \"ff393885-17ba-4d2f-a3ea-b74533842367\") " Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.398107 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff393885-17ba-4d2f-a3ea-b74533842367-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff393885-17ba-4d2f-a3ea-b74533842367" (UID: "ff393885-17ba-4d2f-a3ea-b74533842367"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.398957 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59aaff84-4003-4bf1-ba6b-c1dbadc40702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59aaff84-4003-4bf1-ba6b-c1dbadc40702" (UID: "59aaff84-4003-4bf1-ba6b-c1dbadc40702"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.399255 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33a2c64c-7a55-413d-ab90-c79bf73b9951-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33a2c64c-7a55-413d-ab90-c79bf73b9951" (UID: "33a2c64c-7a55-413d-ab90-c79bf73b9951"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.399257 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c61d47b-62bb-48b1-83ae-1aa375e422cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c61d47b-62bb-48b1-83ae-1aa375e422cd" (UID: "2c61d47b-62bb-48b1-83ae-1aa375e422cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.399657 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32320aa4-106e-4dee-89f2-e5034dde3022-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32320aa4-106e-4dee-89f2-e5034dde3022" (UID: "32320aa4-106e-4dee-89f2-e5034dde3022"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.399702 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" (UID: "ab2a216d-05f7-4862-9e26-3e7ca9c6b56c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.403264 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59aaff84-4003-4bf1-ba6b-c1dbadc40702-kube-api-access-7c4m2" (OuterVolumeSpecName: "kube-api-access-7c4m2") pod "59aaff84-4003-4bf1-ba6b-c1dbadc40702" (UID: "59aaff84-4003-4bf1-ba6b-c1dbadc40702"). InnerVolumeSpecName "kube-api-access-7c4m2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.403685 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32320aa4-106e-4dee-89f2-e5034dde3022-kube-api-access-vcsvf" (OuterVolumeSpecName: "kube-api-access-vcsvf") pod "32320aa4-106e-4dee-89f2-e5034dde3022" (UID: "32320aa4-106e-4dee-89f2-e5034dde3022"). InnerVolumeSpecName "kube-api-access-vcsvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.403805 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff393885-17ba-4d2f-a3ea-b74533842367-kube-api-access-4qv7d" (OuterVolumeSpecName: "kube-api-access-4qv7d") pod "ff393885-17ba-4d2f-a3ea-b74533842367" (UID: "ff393885-17ba-4d2f-a3ea-b74533842367"). InnerVolumeSpecName "kube-api-access-4qv7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.403844 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-kube-api-access-sfdr2" (OuterVolumeSpecName: "kube-api-access-sfdr2") pod "ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" (UID: "ab2a216d-05f7-4862-9e26-3e7ca9c6b56c"). InnerVolumeSpecName "kube-api-access-sfdr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.403886 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c61d47b-62bb-48b1-83ae-1aa375e422cd-kube-api-access-m2dv2" (OuterVolumeSpecName: "kube-api-access-m2dv2") pod "2c61d47b-62bb-48b1-83ae-1aa375e422cd" (UID: "2c61d47b-62bb-48b1-83ae-1aa375e422cd"). InnerVolumeSpecName "kube-api-access-m2dv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.404285 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a2c64c-7a55-413d-ab90-c79bf73b9951-kube-api-access-8tzcr" (OuterVolumeSpecName: "kube-api-access-8tzcr") pod "33a2c64c-7a55-413d-ab90-c79bf73b9951" (UID: "33a2c64c-7a55-413d-ab90-c79bf73b9951"). InnerVolumeSpecName "kube-api-access-8tzcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500083 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500174 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32320aa4-106e-4dee-89f2-e5034dde3022-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500192 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qv7d\" (UniqueName: \"kubernetes.io/projected/ff393885-17ba-4d2f-a3ea-b74533842367-kube-api-access-4qv7d\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500209 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2dv2\" (UniqueName: \"kubernetes.io/projected/2c61d47b-62bb-48b1-83ae-1aa375e422cd-kube-api-access-m2dv2\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500225 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff393885-17ba-4d2f-a3ea-b74533842367-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500238 4992 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-7c4m2\" (UniqueName: \"kubernetes.io/projected/59aaff84-4003-4bf1-ba6b-c1dbadc40702-kube-api-access-7c4m2\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500251 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33a2c64c-7a55-413d-ab90-c79bf73b9951-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500265 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c61d47b-62bb-48b1-83ae-1aa375e422cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500280 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdr2\" (UniqueName: \"kubernetes.io/projected/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c-kube-api-access-sfdr2\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500294 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcsvf\" (UniqueName: \"kubernetes.io/projected/32320aa4-106e-4dee-89f2-e5034dde3022-kube-api-access-vcsvf\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500307 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59aaff84-4003-4bf1-ba6b-c1dbadc40702-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.500320 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tzcr\" (UniqueName: \"kubernetes.io/projected/33a2c64c-7a55-413d-ab90-c79bf73b9951-kube-api-access-8tzcr\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.611618 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-b044-account-create-update-d7xvv" event={"ID":"ab2a216d-05f7-4862-9e26-3e7ca9c6b56c","Type":"ContainerDied","Data":"ca61ad48a2a3599818ecee9b4dea01300f6ffa019a0c48e6d2d1add30c149ebd"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.611680 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b044-account-create-update-d7xvv" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.611687 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca61ad48a2a3599818ecee9b4dea01300f6ffa019a0c48e6d2d1add30c149ebd" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.615106 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4mn8r" event={"ID":"2c61d47b-62bb-48b1-83ae-1aa375e422cd","Type":"ContainerDied","Data":"6ddd3a3c89416e0e19a27eb648f7b5586707fca14bf9ca07a5327a2dbe820382"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.615137 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ddd3a3c89416e0e19a27eb648f7b5586707fca14bf9ca07a5327a2dbe820382" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.615200 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4mn8r" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.618076 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c6e-account-create-update-77t4h" event={"ID":"ff393885-17ba-4d2f-a3ea-b74533842367","Type":"ContainerDied","Data":"bcde57c4edf990ab00e850c0f4afd9c8cd5d6d2312fab03af65dfff157bc7a47"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.618122 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcde57c4edf990ab00e850c0f4afd9c8cd5d6d2312fab03af65dfff157bc7a47" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.618184 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c6e-account-create-update-77t4h" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.621099 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lxx2h" event={"ID":"4aa3c025-0da1-4697-bba1-57cb62d804e5","Type":"ContainerDied","Data":"8fa8bed833b8c93bccc842a99ccdb5897c848a2bb4344f5a1f2a82c40e95198d"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.621129 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa8bed833b8c93bccc842a99ccdb5897c848a2bb4344f5a1f2a82c40e95198d" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.621179 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lxx2h" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.622687 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v24w2" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.622691 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v24w2" event={"ID":"33a2c64c-7a55-413d-ab90-c79bf73b9951","Type":"ContainerDied","Data":"e8230f119a1124185a2852b377428a0d6777560fdae92644e02b3f30d4dfbfa5"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.622818 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8230f119a1124185a2852b377428a0d6777560fdae92644e02b3f30d4dfbfa5" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.624262 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4466-account-create-update-ml5rl" event={"ID":"32320aa4-106e-4dee-89f2-e5034dde3022","Type":"ContainerDied","Data":"5e720c00b7bb8d8c254f11839bcd1f6adfec76fb132eead2ace5aec67e16169f"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.624297 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e720c00b7bb8d8c254f11839bcd1f6adfec76fb132eead2ace5aec67e16169f" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.624360 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4466-account-create-update-ml5rl" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.627516 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-35b4-account-create-update-hxsnq" event={"ID":"59aaff84-4003-4bf1-ba6b-c1dbadc40702","Type":"ContainerDied","Data":"e2a24b20f9dd74e95d573a2ebf2f3478bf2cf743f64e0f9ebd8124f3426ffe95"} Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.627545 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2a24b20f9dd74e95d573a2ebf2f3478bf2cf743f64e0f9ebd8124f3426ffe95" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.627591 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-35b4-account-create-update-hxsnq" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797004 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ssqrt"] Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797390 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a2c64c-7a55-413d-ab90-c79bf73b9951" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797408 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a2c64c-7a55-413d-ab90-c79bf73b9951" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797428 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d56e1b-849a-4c3e-b588-fb052a8bfb46" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797437 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d56e1b-849a-4c3e-b588-fb052a8bfb46" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797452 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa3c025-0da1-4697-bba1-57cb62d804e5" 
containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797459 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa3c025-0da1-4697-bba1-57cb62d804e5" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797478 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff393885-17ba-4d2f-a3ea-b74533842367" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797486 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff393885-17ba-4d2f-a3ea-b74533842367" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797498 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7979a378-800d-4749-bc4b-98f4c85b2624" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797506 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7979a378-800d-4749-bc4b-98f4c85b2624" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797523 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="094dc08d-1b1c-4481-8ab4-127dc18a6b01" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797531 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="094dc08d-1b1c-4481-8ab4-127dc18a6b01" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797547 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32320aa4-106e-4dee-89f2-e5034dde3022" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797555 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="32320aa4-106e-4dee-89f2-e5034dde3022" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797568 4992 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59aaff84-4003-4bf1-ba6b-c1dbadc40702" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797575 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="59aaff84-4003-4bf1-ba6b-c1dbadc40702" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797585 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c61d47b-62bb-48b1-83ae-1aa375e422cd" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797592 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c61d47b-62bb-48b1-83ae-1aa375e422cd" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: E1211 08:41:43.797604 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797612 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797863 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa3c025-0da1-4697-bba1-57cb62d804e5" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797902 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c61d47b-62bb-48b1-83ae-1aa375e422cd" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797921 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a2c64c-7a55-413d-ab90-c79bf73b9951" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797939 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff393885-17ba-4d2f-a3ea-b74533842367" 
containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797958 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="59aaff84-4003-4bf1-ba6b-c1dbadc40702" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797972 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7979a378-800d-4749-bc4b-98f4c85b2624" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.797992 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d56e1b-849a-4c3e-b588-fb052a8bfb46" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.798014 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="32320aa4-106e-4dee-89f2-e5034dde3022" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.798030 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="094dc08d-1b1c-4481-8ab4-127dc18a6b01" containerName="mariadb-database-create" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.798042 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" containerName="mariadb-account-create-update" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.798777 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ssqrt" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.805569 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ssqrt"] Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.907279 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6czr\" (UniqueName: \"kubernetes.io/projected/96e82505-129d-467c-8224-c36229c2da21-kube-api-access-b6czr\") pod \"glance-db-create-ssqrt\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") " pod="openstack/glance-db-create-ssqrt" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.907386 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96e82505-129d-467c-8224-c36229c2da21-operator-scripts\") pod \"glance-db-create-ssqrt\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") " pod="openstack/glance-db-create-ssqrt" Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.912794 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-8deb-account-create-update-kqqw2"] Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.913882 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.917960 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 11 08:41:43 crc kubenswrapper[4992]: I1211 08:41:43.921782 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8deb-account-create-update-kqqw2"]
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.008420 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6czr\" (UniqueName: \"kubernetes.io/projected/96e82505-129d-467c-8224-c36229c2da21-kube-api-access-b6czr\") pod \"glance-db-create-ssqrt\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") " pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.008473 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpd9g\" (UniqueName: \"kubernetes.io/projected/7f6555c6-b872-4fde-be87-155133d67f13-kube-api-access-vpd9g\") pod \"glance-8deb-account-create-update-kqqw2\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") " pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.008545 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6555c6-b872-4fde-be87-155133d67f13-operator-scripts\") pod \"glance-8deb-account-create-update-kqqw2\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") " pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.008687 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96e82505-129d-467c-8224-c36229c2da21-operator-scripts\") pod \"glance-db-create-ssqrt\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") " pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.029610 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6czr\" (UniqueName: \"kubernetes.io/projected/96e82505-129d-467c-8224-c36229c2da21-kube-api-access-b6czr\") pod \"glance-db-create-ssqrt\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") " pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.111322 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpd9g\" (UniqueName: \"kubernetes.io/projected/7f6555c6-b872-4fde-be87-155133d67f13-kube-api-access-vpd9g\") pod \"glance-8deb-account-create-update-kqqw2\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") " pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.111422 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6555c6-b872-4fde-be87-155133d67f13-operator-scripts\") pod \"glance-8deb-account-create-update-kqqw2\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") " pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.112195 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6555c6-b872-4fde-be87-155133d67f13-operator-scripts\") pod \"glance-8deb-account-create-update-kqqw2\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") " pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.127331 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpd9g\" (UniqueName: \"kubernetes.io/projected/7f6555c6-b872-4fde-be87-155133d67f13-kube-api-access-vpd9g\") pod \"glance-8deb-account-create-update-kqqw2\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") " pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.230951 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.500832 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-djp6h" podUID="43b8eb34-f000-49af-bcf9-7507f85afd2b" containerName="ovn-controller" probeResult="failure" output=<
Dec 11 08:41:44 crc kubenswrapper[4992]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 11 08:41:44 crc kubenswrapper[4992]: >
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.520240 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.520739 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-sw28r"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.571586 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96e82505-129d-467c-8224-c36229c2da21-operator-scripts\") pod \"glance-db-create-ssqrt\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") " pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.657997 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-8deb-account-create-update-kqqw2"]
Dec 11 08:41:44 crc kubenswrapper[4992]: W1211 08:41:44.659537 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f6555c6_b872_4fde_be87_155133d67f13.slice/crio-889aa8c52ecddce689d96e3ea790cfdff93aa033ac60c015cac8b2cdc47377ec WatchSource:0}: Error finding container 889aa8c52ecddce689d96e3ea790cfdff93aa033ac60c015cac8b2cdc47377ec: Status 404 returned error can't find the container with id 889aa8c52ecddce689d96e3ea790cfdff93aa033ac60c015cac8b2cdc47377ec
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.724057 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.730624 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-djp6h-config-4lb2m"]
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.732359 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.735399 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.788775 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djp6h-config-4lb2m"]
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.822310 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.822384 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-scripts\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.822438 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-additional-scripts\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.822495 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run-ovn\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.822518 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-log-ovn\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.822577 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt9r6\" (UniqueName: \"kubernetes.io/projected/170a0782-073b-4643-b13a-5b43a524c51f-kube-api-access-lt9r6\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.923814 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run-ovn\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.923866 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-log-ovn\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.923899 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt9r6\" (UniqueName: \"kubernetes.io/projected/170a0782-073b-4643-b13a-5b43a524c51f-kube-api-access-lt9r6\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.924026 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.924050 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-scripts\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.924084 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-additional-scripts\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.924211 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run-ovn\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.924579 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.924908 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-additional-scripts\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.926465 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-scripts\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.926544 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-log-ovn\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:44 crc kubenswrapper[4992]: I1211 08:41:44.958764 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt9r6\" (UniqueName: \"kubernetes.io/projected/170a0782-073b-4643-b13a-5b43a524c51f-kube-api-access-lt9r6\") pod \"ovn-controller-djp6h-config-4lb2m\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") " pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.079215 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.199291 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ssqrt"]
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.450902 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.550910 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-djp6h-config-4lb2m"]
Dec 11 08:41:45 crc kubenswrapper[4992]: W1211 08:41:45.554341 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170a0782_073b_4643_b13a_5b43a524c51f.slice/crio-a3e701ce713dd007250748b210b28cc30f5403d561c58604282b988044559316 WatchSource:0}: Error finding container a3e701ce713dd007250748b210b28cc30f5403d561c58604282b988044559316: Status 404 returned error can't find the container with id a3e701ce713dd007250748b210b28cc30f5403d561c58604282b988044559316
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.681679 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ssqrt" event={"ID":"96e82505-129d-467c-8224-c36229c2da21","Type":"ContainerStarted","Data":"37232d7f8e7807f089109b2a260000b3b5d4cfb6caf2180e82ffc31a711e3e38"}
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.684047 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djp6h-config-4lb2m" event={"ID":"170a0782-073b-4643-b13a-5b43a524c51f","Type":"ContainerStarted","Data":"a3e701ce713dd007250748b210b28cc30f5403d561c58604282b988044559316"}
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.689487 4992 generic.go:334] "Generic (PLEG): container finished" podID="7f6555c6-b872-4fde-be87-155133d67f13" containerID="84f72e32e4a1536c03b6e45dd1265c587a75876ddad6a4fe80a697dec16a67ff" exitCode=0
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.689548 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8deb-account-create-update-kqqw2" event={"ID":"7f6555c6-b872-4fde-be87-155133d67f13","Type":"ContainerDied","Data":"84f72e32e4a1536c03b6e45dd1265c587a75876ddad6a4fe80a697dec16a67ff"}
Dec 11 08:41:45 crc kubenswrapper[4992]: I1211 08:41:45.689596 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8deb-account-create-update-kqqw2" event={"ID":"7f6555c6-b872-4fde-be87-155133d67f13","Type":"ContainerStarted","Data":"889aa8c52ecddce689d96e3ea790cfdff93aa033ac60c015cac8b2cdc47377ec"}
Dec 11 08:41:46 crc kubenswrapper[4992]: I1211 08:41:46.702715 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ssqrt" event={"ID":"96e82505-129d-467c-8224-c36229c2da21","Type":"ContainerStarted","Data":"bf1b6e43f267257a954f9c6ca1c55078d8b53f617af9e0f67859d0fc0a9b408b"}
Dec 11 08:41:46 crc kubenswrapper[4992]: I1211 08:41:46.706989 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djp6h-config-4lb2m" event={"ID":"170a0782-073b-4643-b13a-5b43a524c51f","Type":"ContainerStarted","Data":"9542751233af15fde4b73199cfba6b71a6fd46915b44587491c57c229d08adaf"}
Dec 11 08:41:46 crc kubenswrapper[4992]: I1211 08:41:46.727373 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-ssqrt" podStartSLOduration=3.727350453 podStartE2EDuration="3.727350453s" podCreationTimestamp="2025-12-11 08:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:41:46.725081799 +0000 UTC m=+1130.984555735" watchObservedRunningTime="2025-12-11 08:41:46.727350453 +0000 UTC m=+1130.986824379"
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.160829 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.258623 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpd9g\" (UniqueName: \"kubernetes.io/projected/7f6555c6-b872-4fde-be87-155133d67f13-kube-api-access-vpd9g\") pod \"7f6555c6-b872-4fde-be87-155133d67f13\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") "
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.259013 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6555c6-b872-4fde-be87-155133d67f13-operator-scripts\") pod \"7f6555c6-b872-4fde-be87-155133d67f13\" (UID: \"7f6555c6-b872-4fde-be87-155133d67f13\") "
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.259900 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6555c6-b872-4fde-be87-155133d67f13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f6555c6-b872-4fde-be87-155133d67f13" (UID: "7f6555c6-b872-4fde-be87-155133d67f13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.265571 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6555c6-b872-4fde-be87-155133d67f13-kube-api-access-vpd9g" (OuterVolumeSpecName: "kube-api-access-vpd9g") pod "7f6555c6-b872-4fde-be87-155133d67f13" (UID: "7f6555c6-b872-4fde-be87-155133d67f13"). InnerVolumeSpecName "kube-api-access-vpd9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.360857 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpd9g\" (UniqueName: \"kubernetes.io/projected/7f6555c6-b872-4fde-be87-155133d67f13-kube-api-access-vpd9g\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.360902 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f6555c6-b872-4fde-be87-155133d67f13-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.717569 4992 generic.go:334] "Generic (PLEG): container finished" podID="96e82505-129d-467c-8224-c36229c2da21" containerID="bf1b6e43f267257a954f9c6ca1c55078d8b53f617af9e0f67859d0fc0a9b408b" exitCode=0
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.717772 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ssqrt" event={"ID":"96e82505-129d-467c-8224-c36229c2da21","Type":"ContainerDied","Data":"bf1b6e43f267257a954f9c6ca1c55078d8b53f617af9e0f67859d0fc0a9b408b"}
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.720697 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-8deb-account-create-update-kqqw2"
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.720711 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-8deb-account-create-update-kqqw2" event={"ID":"7f6555c6-b872-4fde-be87-155133d67f13","Type":"ContainerDied","Data":"889aa8c52ecddce689d96e3ea790cfdff93aa033ac60c015cac8b2cdc47377ec"}
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.721175 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889aa8c52ecddce689d96e3ea790cfdff93aa033ac60c015cac8b2cdc47377ec"
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.722760 4992 generic.go:334] "Generic (PLEG): container finished" podID="170a0782-073b-4643-b13a-5b43a524c51f" containerID="9542751233af15fde4b73199cfba6b71a6fd46915b44587491c57c229d08adaf" exitCode=0
Dec 11 08:41:47 crc kubenswrapper[4992]: I1211 08:41:47.722802 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djp6h-config-4lb2m" event={"ID":"170a0782-073b-4643-b13a-5b43a524c51f","Type":"ContainerDied","Data":"9542751233af15fde4b73199cfba6b71a6fd46915b44587491c57c229d08adaf"}
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.788294 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rsks9"]
Dec 11 08:41:48 crc kubenswrapper[4992]: E1211 08:41:48.789141 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6555c6-b872-4fde-be87-155133d67f13" containerName="mariadb-account-create-update"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.789162 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6555c6-b872-4fde-be87-155133d67f13" containerName="mariadb-account-create-update"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.789383 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6555c6-b872-4fde-be87-155133d67f13" containerName="mariadb-account-create-update"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.790137 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.792605 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.792605 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.792977 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.795247 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xs4g8"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.805883 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rsks9"]
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.885121 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-config-data\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.885290 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xdkx\" (UniqueName: \"kubernetes.io/projected/df9e9336-fd88-46fa-9a9e-2533e27df0ed-kube-api-access-5xdkx\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.885335 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-combined-ca-bundle\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.987494 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xdkx\" (UniqueName: \"kubernetes.io/projected/df9e9336-fd88-46fa-9a9e-2533e27df0ed-kube-api-access-5xdkx\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.987546 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-combined-ca-bundle\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.987610 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-config-data\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.992668 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-config-data\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:48 crc kubenswrapper[4992]: I1211 08:41:48.993375 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-combined-ca-bundle\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.011837 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xdkx\" (UniqueName: \"kubernetes.io/projected/df9e9336-fd88-46fa-9a9e-2533e27df0ed-kube-api-access-5xdkx\") pod \"keystone-db-sync-rsks9\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.135554 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rsks9"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.149660 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.190678 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96e82505-129d-467c-8224-c36229c2da21-operator-scripts\") pod \"96e82505-129d-467c-8224-c36229c2da21\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.190745 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6czr\" (UniqueName: \"kubernetes.io/projected/96e82505-129d-467c-8224-c36229c2da21-kube-api-access-b6czr\") pod \"96e82505-129d-467c-8224-c36229c2da21\" (UID: \"96e82505-129d-467c-8224-c36229c2da21\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.191502 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96e82505-129d-467c-8224-c36229c2da21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96e82505-129d-467c-8224-c36229c2da21" (UID: "96e82505-129d-467c-8224-c36229c2da21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.195894 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e82505-129d-467c-8224-c36229c2da21-kube-api-access-b6czr" (OuterVolumeSpecName: "kube-api-access-b6czr") pod "96e82505-129d-467c-8224-c36229c2da21" (UID: "96e82505-129d-467c-8224-c36229c2da21"). InnerVolumeSpecName "kube-api-access-b6czr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.232439 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292043 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run-ovn\") pod \"170a0782-073b-4643-b13a-5b43a524c51f\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292124 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-scripts\") pod \"170a0782-073b-4643-b13a-5b43a524c51f\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292147 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-additional-scripts\") pod \"170a0782-073b-4643-b13a-5b43a524c51f\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292162 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-log-ovn\") pod \"170a0782-073b-4643-b13a-5b43a524c51f\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292226 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run\") pod \"170a0782-073b-4643-b13a-5b43a524c51f\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292289 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt9r6\" (UniqueName: \"kubernetes.io/projected/170a0782-073b-4643-b13a-5b43a524c51f-kube-api-access-lt9r6\") pod \"170a0782-073b-4643-b13a-5b43a524c51f\" (UID: \"170a0782-073b-4643-b13a-5b43a524c51f\") "
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292598 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96e82505-129d-467c-8224-c36229c2da21-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292615 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6czr\" (UniqueName: \"kubernetes.io/projected/96e82505-129d-467c-8224-c36229c2da21-kube-api-access-b6czr\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292664 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run" (OuterVolumeSpecName: "var-run") pod "170a0782-073b-4643-b13a-5b43a524c51f" (UID: "170a0782-073b-4643-b13a-5b43a524c51f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.292656 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "170a0782-073b-4643-b13a-5b43a524c51f" (UID: "170a0782-073b-4643-b13a-5b43a524c51f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.293359 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "170a0782-073b-4643-b13a-5b43a524c51f" (UID: "170a0782-073b-4643-b13a-5b43a524c51f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.293771 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-scripts" (OuterVolumeSpecName: "scripts") pod "170a0782-073b-4643-b13a-5b43a524c51f" (UID: "170a0782-073b-4643-b13a-5b43a524c51f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.293891 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "170a0782-073b-4643-b13a-5b43a524c51f" (UID: "170a0782-073b-4643-b13a-5b43a524c51f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.298229 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170a0782-073b-4643-b13a-5b43a524c51f-kube-api-access-lt9r6" (OuterVolumeSpecName: "kube-api-access-lt9r6") pod "170a0782-073b-4643-b13a-5b43a524c51f" (UID: "170a0782-073b-4643-b13a-5b43a524c51f"). InnerVolumeSpecName "kube-api-access-lt9r6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.394074 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt9r6\" (UniqueName: \"kubernetes.io/projected/170a0782-073b-4643-b13a-5b43a524c51f-kube-api-access-lt9r6\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.394116 4992 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.394126 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.394136 4992 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/170a0782-073b-4643-b13a-5b43a524c51f-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.394148 4992 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.394160 4992 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/170a0782-073b-4643-b13a-5b43a524c51f-var-run\") on node \"crc\" DevicePath \"\""
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.500374 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-djp6h"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.599810 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rsks9"]
Dec 11 08:41:49 crc kubenswrapper[4992]: W1211 08:41:49.610667 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf9e9336_fd88_46fa_9a9e_2533e27df0ed.slice/crio-eb0d7731b386961dbcbb141fdf324f8caa725ad8e0dab0992575e83b89630655 WatchSource:0}: Error finding container eb0d7731b386961dbcbb141fdf324f8caa725ad8e0dab0992575e83b89630655: Status 404 returned error can't find the container with id eb0d7731b386961dbcbb141fdf324f8caa725ad8e0dab0992575e83b89630655
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.743922 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-djp6h-config-4lb2m" event={"ID":"170a0782-073b-4643-b13a-5b43a524c51f","Type":"ContainerDied","Data":"a3e701ce713dd007250748b210b28cc30f5403d561c58604282b988044559316"}
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.743978 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e701ce713dd007250748b210b28cc30f5403d561c58604282b988044559316"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.743945 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-djp6h-config-4lb2m"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.746411 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rsks9" event={"ID":"df9e9336-fd88-46fa-9a9e-2533e27df0ed","Type":"ContainerStarted","Data":"eb0d7731b386961dbcbb141fdf324f8caa725ad8e0dab0992575e83b89630655"}
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.749197 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ssqrt"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.749974 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ssqrt" event={"ID":"96e82505-129d-467c-8224-c36229c2da21","Type":"ContainerDied","Data":"37232d7f8e7807f089109b2a260000b3b5d4cfb6caf2180e82ffc31a711e3e38"}
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.750049 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37232d7f8e7807f089109b2a260000b3b5d4cfb6caf2180e82ffc31a711e3e38"
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.754052 4992 generic.go:334] "Generic (PLEG): container finished" podID="95883dfb-ad1a-4d13-889e-4b9f73ded332" containerID="190765524b1ccbb84f52f0ffa1b930605699ac2e7e3bff018e74f01aedbfc9aa" exitCode=0
Dec 11 08:41:49 crc kubenswrapper[4992]: I1211 08:41:49.754119 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dpvlt" event={"ID":"95883dfb-ad1a-4d13-889e-4b9f73ded332","Type":"ContainerDied","Data":"190765524b1ccbb84f52f0ffa1b930605699ac2e7e3bff018e74f01aedbfc9aa"}
Dec 11 08:41:50 crc kubenswrapper[4992]: I1211 08:41:50.332298 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-djp6h-config-4lb2m"]
Dec 11 08:41:50 crc kubenswrapper[4992]: I1211 08:41:50.340262 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-djp6h-config-4lb2m"]
Dec 
11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.131131 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.231781 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-swiftconf\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.231987 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmvps\" (UniqueName: \"kubernetes.io/projected/95883dfb-ad1a-4d13-889e-4b9f73ded332-kube-api-access-mmvps\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.232027 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-combined-ca-bundle\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.232095 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95883dfb-ad1a-4d13-889e-4b9f73ded332-etc-swift\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.232150 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-scripts\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.232200 
4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-dispersionconf\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.232228 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-ring-data-devices\") pod \"95883dfb-ad1a-4d13-889e-4b9f73ded332\" (UID: \"95883dfb-ad1a-4d13-889e-4b9f73ded332\") " Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.233242 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95883dfb-ad1a-4d13-889e-4b9f73ded332-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.233515 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.243424 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95883dfb-ad1a-4d13-889e-4b9f73ded332-kube-api-access-mmvps" (OuterVolumeSpecName: "kube-api-access-mmvps") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "kube-api-access-mmvps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.252524 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.260054 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.261938 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.266870 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-scripts" (OuterVolumeSpecName: "scripts") pod "95883dfb-ad1a-4d13-889e-4b9f73ded332" (UID: "95883dfb-ad1a-4d13-889e-4b9f73ded332"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336236 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336312 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmvps\" (UniqueName: \"kubernetes.io/projected/95883dfb-ad1a-4d13-889e-4b9f73ded332-kube-api-access-mmvps\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336326 4992 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/95883dfb-ad1a-4d13-889e-4b9f73ded332-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336336 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336346 4992 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336355 4992 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/95883dfb-ad1a-4d13-889e-4b9f73ded332-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.336363 4992 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/95883dfb-ad1a-4d13-889e-4b9f73ded332-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.776038 4992 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dpvlt" event={"ID":"95883dfb-ad1a-4d13-889e-4b9f73ded332","Type":"ContainerDied","Data":"46670349d8c626264da22476d63a94db98fae0ee99896cf462a04b7f43c9dd40"} Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.776091 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46670349d8c626264da22476d63a94db98fae0ee99896cf462a04b7f43c9dd40" Dec 11 08:41:51 crc kubenswrapper[4992]: I1211 08:41:51.776180 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dpvlt" Dec 11 08:41:52 crc kubenswrapper[4992]: I1211 08:41:52.108431 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170a0782-073b-4643-b13a-5b43a524c51f" path="/var/lib/kubelet/pods/170a0782-073b-4643-b13a-5b43a524c51f/volumes" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.986739 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fx6rx"] Dec 11 08:41:53 crc kubenswrapper[4992]: E1211 08:41:53.987218 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170a0782-073b-4643-b13a-5b43a524c51f" containerName="ovn-config" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.987237 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="170a0782-073b-4643-b13a-5b43a524c51f" containerName="ovn-config" Dec 11 08:41:53 crc kubenswrapper[4992]: E1211 08:41:53.987257 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e82505-129d-467c-8224-c36229c2da21" containerName="mariadb-database-create" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.987268 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e82505-129d-467c-8224-c36229c2da21" containerName="mariadb-database-create" Dec 11 08:41:53 crc kubenswrapper[4992]: E1211 08:41:53.987288 4992 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95883dfb-ad1a-4d13-889e-4b9f73ded332" containerName="swift-ring-rebalance" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.987297 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="95883dfb-ad1a-4d13-889e-4b9f73ded332" containerName="swift-ring-rebalance" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.987490 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="95883dfb-ad1a-4d13-889e-4b9f73ded332" containerName="swift-ring-rebalance" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.987511 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e82505-129d-467c-8224-c36229c2da21" containerName="mariadb-database-create" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.987533 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="170a0782-073b-4643-b13a-5b43a524c51f" containerName="ovn-config" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.988233 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.991584 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pt75q" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.992485 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 11 08:41:53 crc kubenswrapper[4992]: I1211 08:41:53.995385 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fx6rx"] Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.086277 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-db-sync-config-data\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.086688 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-combined-ca-bundle\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.086728 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-config-data\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.086913 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2d95\" (UniqueName: 
\"kubernetes.io/projected/19dbc853-9df9-491e-af8d-6c13547cd478-kube-api-access-d2d95\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.188430 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2d95\" (UniqueName: \"kubernetes.io/projected/19dbc853-9df9-491e-af8d-6c13547cd478-kube-api-access-d2d95\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.188672 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-db-sync-config-data\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.188701 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-combined-ca-bundle\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.188731 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-config-data\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.195047 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-db-sync-config-data\") pod \"glance-db-sync-fx6rx\" 
(UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.195944 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-combined-ca-bundle\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.196172 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-config-data\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.209301 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2d95\" (UniqueName: \"kubernetes.io/projected/19dbc853-9df9-491e-af8d-6c13547cd478-kube-api-access-d2d95\") pod \"glance-db-sync-fx6rx\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.306250 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fx6rx" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.899075 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:54 crc kubenswrapper[4992]: I1211 08:41:54.904603 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/47851b57-2a65-4a8a-b2a2-f01a5a2d7833-etc-swift\") pod \"swift-storage-0\" (UID: \"47851b57-2a65-4a8a-b2a2-f01a5a2d7833\") " pod="openstack/swift-storage-0" Dec 11 08:41:55 crc kubenswrapper[4992]: I1211 08:41:55.144735 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 08:42:03 crc kubenswrapper[4992]: E1211 08:42:03.982799 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Dec 11 08:42:03 crc kubenswrapper[4992]: E1211 08:42:03.983657 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xdkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-rsks9_openstack(df9e9336-fd88-46fa-9a9e-2533e27df0ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:42:03 crc kubenswrapper[4992]: E1211 08:42:03.985047 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-rsks9" podUID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" Dec 11 08:42:04 crc kubenswrapper[4992]: I1211 08:42:04.239765 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 08:42:04 crc kubenswrapper[4992]: W1211 08:42:04.243938 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47851b57_2a65_4a8a_b2a2_f01a5a2d7833.slice/crio-74bc5f8b145b0fe12d5f7fa83d043e452214423e18b07bf4e04ca206683bbcc8 WatchSource:0}: Error finding container 74bc5f8b145b0fe12d5f7fa83d043e452214423e18b07bf4e04ca206683bbcc8: Status 404 returned error can't find the container with id 74bc5f8b145b0fe12d5f7fa83d043e452214423e18b07bf4e04ca206683bbcc8 Dec 11 08:42:04 crc kubenswrapper[4992]: I1211 08:42:04.914919 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"74bc5f8b145b0fe12d5f7fa83d043e452214423e18b07bf4e04ca206683bbcc8"} Dec 11 08:42:04 crc kubenswrapper[4992]: E1211 08:42:04.916741 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-rsks9" podUID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" Dec 11 08:42:05 crc kubenswrapper[4992]: I1211 08:42:05.049684 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fx6rx"] Dec 11 08:42:05 crc kubenswrapper[4992]: W1211 08:42:05.053093 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19dbc853_9df9_491e_af8d_6c13547cd478.slice/crio-9f150d41b8b5d5b6cadb3c0c5cf3a09bf297a14a72cd8bfd7647da52a6c580af WatchSource:0}: Error finding container 9f150d41b8b5d5b6cadb3c0c5cf3a09bf297a14a72cd8bfd7647da52a6c580af: Status 404 returned error can't find the container with id 9f150d41b8b5d5b6cadb3c0c5cf3a09bf297a14a72cd8bfd7647da52a6c580af Dec 11 08:42:05 crc kubenswrapper[4992]: I1211 08:42:05.380000 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:42:05 crc kubenswrapper[4992]: I1211 08:42:05.380071 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:42:05 crc kubenswrapper[4992]: I1211 08:42:05.927790 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx6rx" event={"ID":"19dbc853-9df9-491e-af8d-6c13547cd478","Type":"ContainerStarted","Data":"9f150d41b8b5d5b6cadb3c0c5cf3a09bf297a14a72cd8bfd7647da52a6c580af"} Dec 11 08:42:09 crc kubenswrapper[4992]: I1211 08:42:09.965199 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"cafa711ce52902ed9fbdcf4be37acd50308ebf6cb83594f392491c003ef32bf3"} Dec 11 08:42:09 crc kubenswrapper[4992]: I1211 08:42:09.965652 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"4aa599528faf46135b6e616a02586e4ccbefe57a47bf341bbb96ec62b36ae913"} Dec 11 08:42:11 crc kubenswrapper[4992]: I1211 08:42:11.982902 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"40dcb09dba2cc26d565ace9e0b72d1c453ab572582bab3e5eda8cb67e9aa2d4b"} Dec 11 08:42:14 crc kubenswrapper[4992]: I1211 08:42:14.005812 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"aaba65f8e4cc8dd018d5517a44ee2313fa9a5c103db3e094091672bb94f617e7"} Dec 11 08:42:21 crc kubenswrapper[4992]: I1211 08:42:21.080371 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx6rx" event={"ID":"19dbc853-9df9-491e-af8d-6c13547cd478","Type":"ContainerStarted","Data":"bec3e547db1d2714df96d3b7f8490f9777a225f5620875807246aab96a61b628"} Dec 11 08:42:21 crc kubenswrapper[4992]: I1211 08:42:21.085469 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rsks9" event={"ID":"df9e9336-fd88-46fa-9a9e-2533e27df0ed","Type":"ContainerStarted","Data":"1f2495e5635b1c355572f3076f2c0d2d2b6b241a128664c1306dc817335d446c"} Dec 11 08:42:21 crc kubenswrapper[4992]: I1211 08:42:21.100692 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fx6rx" podStartSLOduration=12.781210176 podStartE2EDuration="28.100671988s" podCreationTimestamp="2025-12-11 08:41:53 +0000 UTC" firstStartedPulling="2025-12-11 08:42:05.055882687 +0000 UTC m=+1149.315356613" lastFinishedPulling="2025-12-11 08:42:20.375344499 +0000 UTC m=+1164.634818425" observedRunningTime="2025-12-11 08:42:21.100189977 +0000 UTC m=+1165.359663913" watchObservedRunningTime="2025-12-11 08:42:21.100671988 +0000 UTC m=+1165.360145914" Dec 11 
08:42:21 crc kubenswrapper[4992]: I1211 08:42:21.124598 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rsks9" podStartSLOduration=2.41324632 podStartE2EDuration="33.124579094s" podCreationTimestamp="2025-12-11 08:41:48 +0000 UTC" firstStartedPulling="2025-12-11 08:41:49.613958659 +0000 UTC m=+1133.873432595" lastFinishedPulling="2025-12-11 08:42:20.325291443 +0000 UTC m=+1164.584765369" observedRunningTime="2025-12-11 08:42:21.12068764 +0000 UTC m=+1165.380161576" watchObservedRunningTime="2025-12-11 08:42:21.124579094 +0000 UTC m=+1165.384053020" Dec 11 08:42:22 crc kubenswrapper[4992]: I1211 08:42:22.116013 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"4b90e004734f86dd6473127c726e42d8b841f03d20d14887f455717f1d2aef02"} Dec 11 08:42:22 crc kubenswrapper[4992]: I1211 08:42:22.116443 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"2d7ad8cb9302df39b99e0e46645946ae770f22cb435154ca8ca456cf385e9d00"} Dec 11 08:42:22 crc kubenswrapper[4992]: I1211 08:42:22.116459 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"1930b6c9b50e511b0302885b2b4120334dc137d66c72b15b0c10252c438df8bd"} Dec 11 08:42:22 crc kubenswrapper[4992]: I1211 08:42:22.116473 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"1c09d0ca5836c6708ac94371f01afc3d9b5c118337abe043eda04ee6c387c563"} Dec 11 08:42:24 crc kubenswrapper[4992]: I1211 08:42:24.129094 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"7be03f6d7b6b78c79ec6bac688c6be118e9efe49cca74f8a8453152ed75f3ef8"} Dec 11 08:42:25 crc kubenswrapper[4992]: I1211 08:42:25.141123 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"eba6785c63ed6d35d59611a23c1c64a9617f5304972562dcbbf20cccb2faf40e"} Dec 11 08:42:25 crc kubenswrapper[4992]: I1211 08:42:25.141456 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"0693a44a1915717f18facca82c044f5c1382d1c8f8352541c5a553a2053bbb24"} Dec 11 08:42:26 crc kubenswrapper[4992]: I1211 08:42:26.166542 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"dfe5f1d2bdaaf24bfe455832c093d20efd102c40adc7172040d54694e155f03e"} Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.186423 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"a4736adae5a58f6077c8f0cf59b1f5159bb7c3c07a506577e611d82b6a13e49d"} Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.186847 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"735b906248f1868fe1d9f73f0a65c782948dba961903348c338b2308cb5510c9"} Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.186864 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"47851b57-2a65-4a8a-b2a2-f01a5a2d7833","Type":"ContainerStarted","Data":"fa3a14f449d35f7c38aec30f41f77b657d4ccfb40e536ee09e28b799d0e5ee11"} Dec 11 08:42:28 crc 
kubenswrapper[4992]: I1211 08:42:28.229728 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=47.689596944 podStartE2EDuration="1m7.229709522s" podCreationTimestamp="2025-12-11 08:41:21 +0000 UTC" firstStartedPulling="2025-12-11 08:42:04.246929448 +0000 UTC m=+1148.506403364" lastFinishedPulling="2025-12-11 08:42:23.787042026 +0000 UTC m=+1168.046515942" observedRunningTime="2025-12-11 08:42:28.222114537 +0000 UTC m=+1172.481588463" watchObservedRunningTime="2025-12-11 08:42:28.229709522 +0000 UTC m=+1172.489183448" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.491302 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dmm6w"] Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.492602 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.494582 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.506690 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dmm6w"] Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.602114 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcjc\" (UniqueName: \"kubernetes.io/projected/2d51bc1e-599b-4281-9c4b-4df4350d79d8-kube-api-access-tfcjc\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.602228 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: 
\"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.602271 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-config\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.602314 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.602331 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.602350 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.704190 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-sb\") pod 
\"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.704603 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.704649 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.704678 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfcjc\" (UniqueName: \"kubernetes.io/projected/2d51bc1e-599b-4281-9c4b-4df4350d79d8-kube-api-access-tfcjc\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.704746 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.704787 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-config\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: 
\"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.705377 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.705596 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.706166 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-config\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.706324 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.706349 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc 
kubenswrapper[4992]: I1211 08:42:28.724770 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfcjc\" (UniqueName: \"kubernetes.io/projected/2d51bc1e-599b-4281-9c4b-4df4350d79d8-kube-api-access-tfcjc\") pod \"dnsmasq-dns-77585f5f8c-dmm6w\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") " pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:28 crc kubenswrapper[4992]: I1211 08:42:28.833990 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:29 crc kubenswrapper[4992]: I1211 08:42:29.074028 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dmm6w"] Dec 11 08:42:29 crc kubenswrapper[4992]: I1211 08:42:29.207748 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" event={"ID":"2d51bc1e-599b-4281-9c4b-4df4350d79d8","Type":"ContainerStarted","Data":"72547a12e343c37ad2283d423eb034a211b27c6bb0328098adc67439aa200082"} Dec 11 08:42:30 crc kubenswrapper[4992]: I1211 08:42:30.218819 4992 generic.go:334] "Generic (PLEG): container finished" podID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerID="b5e2da316d76fce14d69bddfcff10d3725bd92926b33c0fe4b2fedd25caf16a8" exitCode=0 Dec 11 08:42:30 crc kubenswrapper[4992]: I1211 08:42:30.218868 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" event={"ID":"2d51bc1e-599b-4281-9c4b-4df4350d79d8","Type":"ContainerDied","Data":"b5e2da316d76fce14d69bddfcff10d3725bd92926b33c0fe4b2fedd25caf16a8"} Dec 11 08:42:31 crc kubenswrapper[4992]: I1211 08:42:31.228849 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" event={"ID":"2d51bc1e-599b-4281-9c4b-4df4350d79d8","Type":"ContainerStarted","Data":"a38bcdacdf0d72510d71a84d62142ddd7a186c711d77eccdef7f7ae831251631"} Dec 11 08:42:31 crc kubenswrapper[4992]: I1211 08:42:31.229252 4992 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" Dec 11 08:42:31 crc kubenswrapper[4992]: I1211 08:42:31.230168 4992 generic.go:334] "Generic (PLEG): container finished" podID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" containerID="1f2495e5635b1c355572f3076f2c0d2d2b6b241a128664c1306dc817335d446c" exitCode=0 Dec 11 08:42:31 crc kubenswrapper[4992]: I1211 08:42:31.230206 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rsks9" event={"ID":"df9e9336-fd88-46fa-9a9e-2533e27df0ed","Type":"ContainerDied","Data":"1f2495e5635b1c355572f3076f2c0d2d2b6b241a128664c1306dc817335d446c"} Dec 11 08:42:31 crc kubenswrapper[4992]: I1211 08:42:31.251900 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" podStartSLOduration=3.251880061 podStartE2EDuration="3.251880061s" podCreationTimestamp="2025-12-11 08:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:42:31.244566801 +0000 UTC m=+1175.504040727" watchObservedRunningTime="2025-12-11 08:42:31.251880061 +0000 UTC m=+1175.511353997" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.546555 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rsks9" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.681221 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-combined-ca-bundle\") pod \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.681318 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xdkx\" (UniqueName: \"kubernetes.io/projected/df9e9336-fd88-46fa-9a9e-2533e27df0ed-kube-api-access-5xdkx\") pod \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.681435 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-config-data\") pod \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\" (UID: \"df9e9336-fd88-46fa-9a9e-2533e27df0ed\") " Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.686822 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df9e9336-fd88-46fa-9a9e-2533e27df0ed-kube-api-access-5xdkx" (OuterVolumeSpecName: "kube-api-access-5xdkx") pod "df9e9336-fd88-46fa-9a9e-2533e27df0ed" (UID: "df9e9336-fd88-46fa-9a9e-2533e27df0ed"). InnerVolumeSpecName "kube-api-access-5xdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.706941 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df9e9336-fd88-46fa-9a9e-2533e27df0ed" (UID: "df9e9336-fd88-46fa-9a9e-2533e27df0ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.729970 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-config-data" (OuterVolumeSpecName: "config-data") pod "df9e9336-fd88-46fa-9a9e-2533e27df0ed" (UID: "df9e9336-fd88-46fa-9a9e-2533e27df0ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.783653 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.783686 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xdkx\" (UniqueName: \"kubernetes.io/projected/df9e9336-fd88-46fa-9a9e-2533e27df0ed-kube-api-access-5xdkx\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:32 crc kubenswrapper[4992]: I1211 08:42:32.783697 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df9e9336-fd88-46fa-9a9e-2533e27df0ed-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.246392 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rsks9" event={"ID":"df9e9336-fd88-46fa-9a9e-2533e27df0ed","Type":"ContainerDied","Data":"eb0d7731b386961dbcbb141fdf324f8caa725ad8e0dab0992575e83b89630655"} Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.246436 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb0d7731b386961dbcbb141fdf324f8caa725ad8e0dab0992575e83b89630655" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.246432 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rsks9" Dec 11 08:42:33 crc kubenswrapper[4992]: E1211 08:42:33.434511 4992 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf9e9336_fd88_46fa_9a9e_2533e27df0ed.slice\": RecentStats: unable to find data in memory cache]" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.540689 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-g2rsp"] Dec 11 08:42:33 crc kubenswrapper[4992]: E1211 08:42:33.541159 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" containerName="keystone-db-sync" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.541184 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" containerName="keystone-db-sync" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.541383 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" containerName="keystone-db-sync" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.542181 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.545597 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.545867 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.547275 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xs4g8" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.548862 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.549107 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.550140 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g2rsp"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.559113 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dmm6w"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.559361 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerName="dnsmasq-dns" containerID="cri-o://a38bcdacdf0d72510d71a84d62142ddd7a186c711d77eccdef7f7ae831251631" gracePeriod=10 Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.606271 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-w5tjx"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.617916 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.631948 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-w5tjx"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702297 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-scripts\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702360 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702395 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702422 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-svc\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702450 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-credential-keys\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702511 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hc7\" (UniqueName: \"kubernetes.io/projected/f350b606-93b2-485b-85cd-a705acffd3e1-kube-api-access-t6hc7\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702569 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-combined-ca-bundle\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702586 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6npfz\" (UniqueName: \"kubernetes.io/projected/324347da-2503-407c-8a8b-49ded124f2a4-kube-api-access-6npfz\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702619 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-fernet-keys\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702650 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-config\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702673 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-config-data\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.702694 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.710789 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f6c5dcd45-pxlgq"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.716309 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.728793 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.728804 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-kxpwj" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.729025 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.729562 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.734177 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f6c5dcd45-pxlgq"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.796715 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-8mmcr"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.797834 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.801929 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.802179 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p8krz" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.802346 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.803925 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-combined-ca-bundle\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.803966 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6npfz\" (UniqueName: \"kubernetes.io/projected/324347da-2503-407c-8a8b-49ded124f2a4-kube-api-access-6npfz\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804012 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-fernet-keys\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804037 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-config\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: 
\"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804068 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-config-data\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804097 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804138 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-scripts\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804175 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804207 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " 
pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804240 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-scripts\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804264 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlsc\" (UniqueName: \"kubernetes.io/projected/98801adb-78c9-422e-a24b-1e8082db71f7-kube-api-access-brlsc\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804289 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-svc\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804313 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98801adb-78c9-422e-a24b-1e8082db71f7-horizon-secret-key\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804342 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98801adb-78c9-422e-a24b-1e8082db71f7-logs\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " 
pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804365 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-credential-keys\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804392 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-config-data\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.804443 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6hc7\" (UniqueName: \"kubernetes.io/projected/f350b606-93b2-485b-85cd-a705acffd3e1-kube-api-access-t6hc7\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.806751 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.806829 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-svc\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc 
kubenswrapper[4992]: I1211 08:42:33.806899 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.806970 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.807505 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-config\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.834133 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-fernet-keys\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.841613 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-credential-keys\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.847264 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-combined-ca-bundle\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.847840 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-scripts\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.849171 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6hc7\" (UniqueName: \"kubernetes.io/projected/f350b606-93b2-485b-85cd-a705acffd3e1-kube-api-access-t6hc7\") pod \"dnsmasq-dns-55fff446b9-w5tjx\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.851797 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8mmcr"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.863323 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-config-data\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.887340 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6npfz\" (UniqueName: \"kubernetes.io/projected/324347da-2503-407c-8a8b-49ded124f2a4-kube-api-access-6npfz\") pod \"keystone-bootstrap-g2rsp\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.889477 4992 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.902916 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b5bd6fd5-lzb8x"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.904310 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.907782 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-db-sync-config-data\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.907859 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-scripts\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.907903 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-combined-ca-bundle\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.907956 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-config-data\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc 
kubenswrapper[4992]: I1211 08:42:33.907996 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-scripts\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908023 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpt6c\" (UniqueName: \"kubernetes.io/projected/73c99101-825a-4a3b-acf0-7fc522f3631f-kube-api-access-gpt6c\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908051 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlsc\" (UniqueName: \"kubernetes.io/projected/98801adb-78c9-422e-a24b-1e8082db71f7-kube-api-access-brlsc\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908079 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c99101-825a-4a3b-acf0-7fc522f3631f-etc-machine-id\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908101 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98801adb-78c9-422e-a24b-1e8082db71f7-horizon-secret-key\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908129 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98801adb-78c9-422e-a24b-1e8082db71f7-logs\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908159 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-config-data\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.908992 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-scripts\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.910050 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98801adb-78c9-422e-a24b-1e8082db71f7-logs\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.917931 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-config-data\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.928480 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-v95pn"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.929589 4992 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.932947 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98801adb-78c9-422e-a24b-1e8082db71f7-horizon-secret-key\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.938248 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.946151 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlsc\" (UniqueName: \"kubernetes.io/projected/98801adb-78c9-422e-a24b-1e8082db71f7-kube-api-access-brlsc\") pod \"horizon-5f6c5dcd45-pxlgq\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.946464 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cqxff" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.950174 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.964866 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.965185 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.966190 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.977513 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.982792 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:42:33 crc kubenswrapper[4992]: I1211 08:42:33.995307 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5bd6fd5-lzb8x"] Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012003 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-scripts\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012052 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwb6x\" (UniqueName: \"kubernetes.io/projected/06a1cf3a-2219-4683-8fab-10bee631255d-kube-api-access-hwb6x\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012087 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-config-data\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012149 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-config-data\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012177 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-scripts\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012208 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-db-sync-config-data\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012257 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-scripts\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012294 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-combined-ca-bundle\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012320 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c8bl\" (UniqueName: \"kubernetes.io/projected/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-kube-api-access-4c8bl\") pod \"barbican-db-sync-v95pn\" (UID: 
\"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012345 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012390 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06a1cf3a-2219-4683-8fab-10bee631255d-horizon-secret-key\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012414 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-combined-ca-bundle\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012436 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-config-data\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012462 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " 
pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012486 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a1cf3a-2219-4683-8fab-10bee631255d-logs\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012515 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpt6c\" (UniqueName: \"kubernetes.io/projected/73c99101-825a-4a3b-acf0-7fc522f3631f-kube-api-access-gpt6c\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012538 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qv57\" (UniqueName: \"kubernetes.io/projected/65182c76-fea3-4f83-b03f-bfce48989e82-kube-api-access-7qv57\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012564 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c99101-825a-4a3b-acf0-7fc522f3631f-etc-machine-id\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012592 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-db-sync-config-data\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:34 crc 
kubenswrapper[4992]: I1211 08:42:34.012619 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-run-httpd\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.012654 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-log-httpd\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.015255 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c99101-825a-4a3b-acf0-7fc522f3631f-etc-machine-id\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.024353 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-combined-ca-bundle\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.024681 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-scripts\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.038761 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-config-data\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.045854 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-db-sync-config-data\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.070886 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v95pn"] Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.072868 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpt6c\" (UniqueName: \"kubernetes.io/projected/73c99101-825a-4a3b-acf0-7fc522f3631f-kube-api-access-gpt6c\") pod \"cinder-db-sync-8mmcr\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113647 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-config-data\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113705 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-scripts\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113785 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c8bl\" (UniqueName: 
\"kubernetes.io/projected/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-kube-api-access-4c8bl\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113807 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113839 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06a1cf3a-2219-4683-8fab-10bee631255d-horizon-secret-key\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113854 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-combined-ca-bundle\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113883 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113902 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a1cf3a-2219-4683-8fab-10bee631255d-logs\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: 
\"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113933 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qv57\" (UniqueName: \"kubernetes.io/projected/65182c76-fea3-4f83-b03f-bfce48989e82-kube-api-access-7qv57\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113971 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-db-sync-config-data\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.113998 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-run-httpd\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.114015 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-log-httpd\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.114043 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-scripts\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.114060 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hwb6x\" (UniqueName: \"kubernetes.io/projected/06a1cf3a-2219-4683-8fab-10bee631255d-kube-api-access-hwb6x\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.114079 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-config-data\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.115262 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-config-data\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.137834 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-scripts\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.139594 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-log-httpd\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0" Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.140360 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-run-httpd\") pod \"ceilometer-0\" (UID: 
\"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.141192 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a1cf3a-2219-4683-8fab-10bee631255d-logs\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.166149 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06a1cf3a-2219-4683-8fab-10bee631255d-horizon-secret-key\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.179093 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-config-data\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.181667 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-scripts\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.182123 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.187914 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.189921 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qv57\" (UniqueName: \"kubernetes.io/projected/65182c76-fea3-4f83-b03f-bfce48989e82-kube-api-access-7qv57\") pod \"ceilometer-0\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.194952 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c8bl\" (UniqueName: \"kubernetes.io/projected/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-kube-api-access-4c8bl\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.219103 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.239278 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-db-sync-config-data\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.253118 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-combined-ca-bundle\") pod \"barbican-db-sync-v95pn\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " pod="openstack/barbican-db-sync-v95pn"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.259130 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.284884 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwb6x\" (UniqueName: \"kubernetes.io/projected/06a1cf3a-2219-4683-8fab-10bee631255d-kube-api-access-hwb6x\") pod \"horizon-5b5bd6fd5-lzb8x\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " pod="openstack/horizon-5b5bd6fd5-lzb8x"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.322184 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-w5tjx"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.335607 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-8mmcr"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.341918 4992 generic.go:334] "Generic (PLEG): container finished" podID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerID="a38bcdacdf0d72510d71a84d62142ddd7a186c711d77eccdef7f7ae831251631" exitCode=0
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.341968 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" event={"ID":"2d51bc1e-599b-4281-9c4b-4df4350d79d8","Type":"ContainerDied","Data":"a38bcdacdf0d72510d71a84d62142ddd7a186c711d77eccdef7f7ae831251631"}
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.360921 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-thwnm"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.362014 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.367092 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pvh9n"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.367276 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.367367 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-thwnm"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.367483 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.413293 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5bd6fd5-lzb8x"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.449770 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-tcqkm"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.450118 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-v95pn"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.450916 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.455467 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2xdtn"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.456231 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.456435 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.465733 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tcqkm"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.479093 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-rbwgf"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.480714 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.504729 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-rbwgf"]
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.531782 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-config\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.531831 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qj9w\" (UniqueName: \"kubernetes.io/projected/429dae0d-117e-4943-966e-11460f9676b7-kube-api-access-9qj9w\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.531858 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-combined-ca-bundle\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.633322 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.633380 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-config\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.633406 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8231c59-b8e7-4f7d-aeb0-888d579425ac-logs\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.633430 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-combined-ca-bundle\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.633452 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qj9w\" (UniqueName: \"kubernetes.io/projected/429dae0d-117e-4943-966e-11460f9676b7-kube-api-access-9qj9w\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.633473 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.635993 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-combined-ca-bundle\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636053 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkt9g\" (UniqueName: \"kubernetes.io/projected/f8231c59-b8e7-4f7d-aeb0-888d579425ac-kube-api-access-hkt9g\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636098 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-scripts\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636122 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636169 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pgz9\" (UniqueName: \"kubernetes.io/projected/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-kube-api-access-8pgz9\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636183 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-config\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636222 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.636250 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-config-data\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.641235 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-combined-ca-bundle\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.647782 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-config\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.657208 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qj9w\" (UniqueName: \"kubernetes.io/projected/429dae0d-117e-4943-966e-11460f9676b7-kube-api-access-9qj9w\") pod \"neutron-db-sync-thwnm\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.729251 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-thwnm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738378 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738456 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkt9g\" (UniqueName: \"kubernetes.io/projected/f8231c59-b8e7-4f7d-aeb0-888d579425ac-kube-api-access-hkt9g\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738496 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-scripts\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738542 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738578 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pgz9\" (UniqueName: \"kubernetes.io/projected/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-kube-api-access-8pgz9\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738595 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-config\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738652 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738689 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-config-data\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738735 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738766 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8231c59-b8e7-4f7d-aeb0-888d579425ac-logs\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.738790 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-combined-ca-bundle\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.740743 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.740806 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.741471 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.741507 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-config\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.741847 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8231c59-b8e7-4f7d-aeb0-888d579425ac-logs\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.742841 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.744357 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-config-data\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.745578 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-scripts\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.746279 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-combined-ca-bundle\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.762063 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkt9g\" (UniqueName: \"kubernetes.io/projected/f8231c59-b8e7-4f7d-aeb0-888d579425ac-kube-api-access-hkt9g\") pod \"placement-db-sync-tcqkm\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.765026 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pgz9\" (UniqueName: \"kubernetes.io/projected/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-kube-api-access-8pgz9\") pod \"dnsmasq-dns-76fcf4b695-rbwgf\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.789963 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-tcqkm"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.818090 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf"
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.911459 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-g2rsp"]
Dec 11 08:42:34 crc kubenswrapper[4992]: W1211 08:42:34.917717 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod324347da_2503_407c_8a8b_49ded124f2a4.slice/crio-6772c1140222b463a9df57ae9ee652abdf935c08d1388949ea2eb5ee0e76204a WatchSource:0}: Error finding container 6772c1140222b463a9df57ae9ee652abdf935c08d1388949ea2eb5ee0e76204a: Status 404 returned error can't find the container with id 6772c1140222b463a9df57ae9ee652abdf935c08d1388949ea2eb5ee0e76204a
Dec 11 08:42:34 crc kubenswrapper[4992]: I1211 08:42:34.969678 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-w5tjx"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.004156 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w"
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.005001 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f6c5dcd45-pxlgq"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.104419 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.115880 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-8mmcr"]
Dec 11 08:42:35 crc kubenswrapper[4992]: W1211 08:42:35.118218 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65182c76_fea3_4f83_b03f_bfce48989e82.slice/crio-06f547eef2b98629473d5e712053bcc2aeaaf58a6ef098fde21f051966e188e7 WatchSource:0}: Error finding container 06f547eef2b98629473d5e712053bcc2aeaaf58a6ef098fde21f051966e188e7: Status 404 returned error can't find the container with id 06f547eef2b98629473d5e712053bcc2aeaaf58a6ef098fde21f051966e188e7
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.152213 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-swift-storage-0\") pod \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") "
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.152661 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-sb\") pod \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") "
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.152775 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-svc\") pod \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") "
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.152875 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfcjc\" (UniqueName: \"kubernetes.io/projected/2d51bc1e-599b-4281-9c4b-4df4350d79d8-kube-api-access-tfcjc\") pod \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") "
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.152944 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-config\") pod \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") "
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.152998 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-nb\") pod \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\" (UID: \"2d51bc1e-599b-4281-9c4b-4df4350d79d8\") "
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.162242 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d51bc1e-599b-4281-9c4b-4df4350d79d8-kube-api-access-tfcjc" (OuterVolumeSpecName: "kube-api-access-tfcjc") pod "2d51bc1e-599b-4281-9c4b-4df4350d79d8" (UID: "2d51bc1e-599b-4281-9c4b-4df4350d79d8"). InnerVolumeSpecName "kube-api-access-tfcjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.255989 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfcjc\" (UniqueName: \"kubernetes.io/projected/2d51bc1e-599b-4281-9c4b-4df4350d79d8-kube-api-access-tfcjc\") on node \"crc\" DevicePath \"\""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.258159 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d51bc1e-599b-4281-9c4b-4df4350d79d8" (UID: "2d51bc1e-599b-4281-9c4b-4df4350d79d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.309901 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-v95pn"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.317347 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d51bc1e-599b-4281-9c4b-4df4350d79d8" (UID: "2d51bc1e-599b-4281-9c4b-4df4350d79d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.326904 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5bd6fd5-lzb8x"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.349973 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d51bc1e-599b-4281-9c4b-4df4350d79d8" (UID: "2d51bc1e-599b-4281-9c4b-4df4350d79d8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.360518 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.360547 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.360561 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.364672 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w" event={"ID":"2d51bc1e-599b-4281-9c4b-4df4350d79d8","Type":"ContainerDied","Data":"72547a12e343c37ad2283d423eb034a211b27c6bb0328098adc67439aa200082"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.364803 4992 scope.go:117] "RemoveContainer" containerID="a38bcdacdf0d72510d71a84d62142ddd7a186c711d77eccdef7f7ae831251631"
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.364667 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dmm6w"
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.369157 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-config" (OuterVolumeSpecName: "config") pod "2d51bc1e-599b-4281-9c4b-4df4350d79d8" (UID: "2d51bc1e-599b-4281-9c4b-4df4350d79d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.369317 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2rsp" event={"ID":"324347da-2503-407c-8a8b-49ded124f2a4","Type":"ContainerStarted","Data":"7cb784d206c5ffadbdbc364d7da11f757d984a6faece0f465dc5454e5623b3d7"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.369354 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2rsp" event={"ID":"324347da-2503-407c-8a8b-49ded124f2a4","Type":"ContainerStarted","Data":"6772c1140222b463a9df57ae9ee652abdf935c08d1388949ea2eb5ee0e76204a"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.370281 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d51bc1e-599b-4281-9c4b-4df4350d79d8" (UID: "2d51bc1e-599b-4281-9c4b-4df4350d79d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.371501 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v95pn" event={"ID":"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b","Type":"ContainerStarted","Data":"11dd8f3e8c55dc9a5359786a2744d6b901deaed55394543c3066ebe555da5fb9"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.374053 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65182c76-fea3-4f83-b03f-bfce48989e82","Type":"ContainerStarted","Data":"06f547eef2b98629473d5e712053bcc2aeaaf58a6ef098fde21f051966e188e7"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.377385 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" event={"ID":"f350b606-93b2-485b-85cd-a705acffd3e1","Type":"ContainerStarted","Data":"7d909e0e9200ab8c438f528d7349123ca8d5f9435977883f3f4f0260333a70dd"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.379016 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.379056 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.380476 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8mmcr" event={"ID":"73c99101-825a-4a3b-acf0-7fc522f3631f","Type":"ContainerStarted","Data":"4b4e44e36bedd6e6c89be54ccee98c7ac8796a1d74267d953383fa30fd68de0f"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.382004 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6c5dcd45-pxlgq" event={"ID":"98801adb-78c9-422e-a24b-1e8082db71f7","Type":"ContainerStarted","Data":"b9529e774eda2d4f341c70bcc0374812920e6e8d72fd11db450159047df905db"}
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.396930 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-g2rsp" podStartSLOduration=2.396910812 podStartE2EDuration="2.396910812s" podCreationTimestamp="2025-12-11 08:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:42:35.387615454 +0000 UTC m=+1179.647089380" watchObservedRunningTime="2025-12-11 08:42:35.396910812 +0000 UTC m=+1179.656384748"
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.409828 4992 scope.go:117] "RemoveContainer" containerID="b5e2da316d76fce14d69bddfcff10d3725bd92926b33c0fe4b2fedd25caf16a8"
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.464274 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-config\") on node \"crc\" DevicePath \"\""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.464325 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d51bc1e-599b-4281-9c4b-4df4350d79d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.521321 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-thwnm"]
Dec 11 08:42:35 crc kubenswrapper[4992]: W1211 08:42:35.534868 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod429dae0d_117e_4943_966e_11460f9676b7.slice/crio-a95ff77a87e9826a646deb8b48ae2ab75d20e2101e2b691b22a70cbd82c46c5c WatchSource:0}: Error finding container a95ff77a87e9826a646deb8b48ae2ab75d20e2101e2b691b22a70cbd82c46c5c: Status 404 returned error can't find the container with id a95ff77a87e9826a646deb8b48ae2ab75d20e2101e2b691b22a70cbd82c46c5c
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.539573 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-rbwgf"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.551938 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-tcqkm"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.812878 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dmm6w"]
Dec 11 08:42:35 crc kubenswrapper[4992]: I1211 08:42:35.830990 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dmm6w"]
Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.114970 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" path="/var/lib/kubelet/pods/2d51bc1e-599b-4281-9c4b-4df4350d79d8/volumes"
Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.375440 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5bd6fd5-lzb8x"]
Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.384130 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86697cc57f-6xflj"]
Dec 11 08:42:36 crc kubenswrapper[4992]: E1211 08:42:36.384557 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerName="init"
Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.384581 4992 state_mem.go:107] "Deleted CPUSet assignment"
podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerName="init" Dec 11 08:42:36 crc kubenswrapper[4992]: E1211 08:42:36.384611 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerName="dnsmasq-dns" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.384617 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerName="dnsmasq-dns" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.384857 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d51bc1e-599b-4281-9c4b-4df4350d79d8" containerName="dnsmasq-dns" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.387256 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.407966 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86697cc57f-6xflj"] Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.458448 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.458973 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tcqkm" event={"ID":"f8231c59-b8e7-4f7d-aeb0-888d579425ac","Type":"ContainerStarted","Data":"dec07bbedb97eb11a289c8ed382c2e3903633c6761ea1dcaef1b86485f29afa7"} Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.470811 4992 generic.go:334] "Generic (PLEG): container finished" podID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerID="9b94d824ce485c2d18fdb524cbfe2729238a291857d2b8d4ab09dab5e40ffd55" exitCode=0 Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.470888 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" 
event={"ID":"d815e9b8-e583-4e4f-91cb-cc3a8f820eed","Type":"ContainerDied","Data":"9b94d824ce485c2d18fdb524cbfe2729238a291857d2b8d4ab09dab5e40ffd55"} Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.470917 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" event={"ID":"d815e9b8-e583-4e4f-91cb-cc3a8f820eed","Type":"ContainerStarted","Data":"a0d0f426021ac08a5a34f3768ff0be36ba308eb0af4481d79e5b9aee4e794128"} Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.501796 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-scripts\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.501892 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87k5h\" (UniqueName: \"kubernetes.io/projected/d0f9640a-de32-425a-b4bb-24618ad6b8b7-kube-api-access-87k5h\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.501921 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0f9640a-de32-425a-b4bb-24618ad6b8b7-horizon-secret-key\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.501951 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f9640a-de32-425a-b4bb-24618ad6b8b7-logs\") pod \"horizon-86697cc57f-6xflj\" (UID: 
\"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.502003 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-config-data\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.507865 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5bd6fd5-lzb8x" event={"ID":"06a1cf3a-2219-4683-8fab-10bee631255d","Type":"ContainerStarted","Data":"5e172187a61f7cf7f1faa13b0ee895b4757f430cdfc34fb979439bc7e31cf82f"} Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.520132 4992 generic.go:334] "Generic (PLEG): container finished" podID="f350b606-93b2-485b-85cd-a705acffd3e1" containerID="f276728f31aa534aab5fb019c758e66dfe0e8b2d98e9b0fa99b6f4126314a0e7" exitCode=0 Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.520193 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" event={"ID":"f350b606-93b2-485b-85cd-a705acffd3e1","Type":"ContainerDied","Data":"f276728f31aa534aab5fb019c758e66dfe0e8b2d98e9b0fa99b6f4126314a0e7"} Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.538496 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thwnm" event={"ID":"429dae0d-117e-4943-966e-11460f9676b7","Type":"ContainerStarted","Data":"22d72687af96c39323b29da101171820fbb3d544852bf7b6e45acf5c8555cf8e"} Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.538548 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thwnm" event={"ID":"429dae0d-117e-4943-966e-11460f9676b7","Type":"ContainerStarted","Data":"a95ff77a87e9826a646deb8b48ae2ab75d20e2101e2b691b22a70cbd82c46c5c"} Dec 11 08:42:36 crc 
kubenswrapper[4992]: I1211 08:42:36.598243 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-thwnm" podStartSLOduration=3.598220648 podStartE2EDuration="3.598220648s" podCreationTimestamp="2025-12-11 08:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:42:36.582652237 +0000 UTC m=+1180.842126163" watchObservedRunningTime="2025-12-11 08:42:36.598220648 +0000 UTC m=+1180.857694574" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.604925 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87k5h\" (UniqueName: \"kubernetes.io/projected/d0f9640a-de32-425a-b4bb-24618ad6b8b7-kube-api-access-87k5h\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.604982 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0f9640a-de32-425a-b4bb-24618ad6b8b7-horizon-secret-key\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.605014 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f9640a-de32-425a-b4bb-24618ad6b8b7-logs\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.605085 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-config-data\") pod \"horizon-86697cc57f-6xflj\" (UID: 
\"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.605117 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-scripts\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.605889 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-scripts\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.606541 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f9640a-de32-425a-b4bb-24618ad6b8b7-logs\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.607901 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-config-data\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.612165 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0f9640a-de32-425a-b4bb-24618ad6b8b7-horizon-secret-key\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.644711 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87k5h\" (UniqueName: \"kubernetes.io/projected/d0f9640a-de32-425a-b4bb-24618ad6b8b7-kube-api-access-87k5h\") pod \"horizon-86697cc57f-6xflj\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:36 crc kubenswrapper[4992]: I1211 08:42:36.758306 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.088101 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.144745 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-swift-storage-0\") pod \"f350b606-93b2-485b-85cd-a705acffd3e1\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.144905 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-sb\") pod \"f350b606-93b2-485b-85cd-a705acffd3e1\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.144985 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-nb\") pod \"f350b606-93b2-485b-85cd-a705acffd3e1\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.145043 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-config\") pod \"f350b606-93b2-485b-85cd-a705acffd3e1\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.145060 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-svc\") pod \"f350b606-93b2-485b-85cd-a705acffd3e1\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.145087 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6hc7\" (UniqueName: \"kubernetes.io/projected/f350b606-93b2-485b-85cd-a705acffd3e1-kube-api-access-t6hc7\") pod \"f350b606-93b2-485b-85cd-a705acffd3e1\" (UID: \"f350b606-93b2-485b-85cd-a705acffd3e1\") " Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.185672 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-config" (OuterVolumeSpecName: "config") pod "f350b606-93b2-485b-85cd-a705acffd3e1" (UID: "f350b606-93b2-485b-85cd-a705acffd3e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.199250 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f350b606-93b2-485b-85cd-a705acffd3e1-kube-api-access-t6hc7" (OuterVolumeSpecName: "kube-api-access-t6hc7") pod "f350b606-93b2-485b-85cd-a705acffd3e1" (UID: "f350b606-93b2-485b-85cd-a705acffd3e1"). InnerVolumeSpecName "kube-api-access-t6hc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.222381 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f350b606-93b2-485b-85cd-a705acffd3e1" (UID: "f350b606-93b2-485b-85cd-a705acffd3e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.241696 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f350b606-93b2-485b-85cd-a705acffd3e1" (UID: "f350b606-93b2-485b-85cd-a705acffd3e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.242102 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f350b606-93b2-485b-85cd-a705acffd3e1" (UID: "f350b606-93b2-485b-85cd-a705acffd3e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.245011 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f350b606-93b2-485b-85cd-a705acffd3e1" (UID: "f350b606-93b2-485b-85cd-a705acffd3e1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.248853 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.248888 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.248902 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.248913 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.248924 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6hc7\" (UniqueName: \"kubernetes.io/projected/f350b606-93b2-485b-85cd-a705acffd3e1-kube-api-access-t6hc7\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.248934 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f350b606-93b2-485b-85cd-a705acffd3e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.420377 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86697cc57f-6xflj"] Dec 11 08:42:37 crc kubenswrapper[4992]: W1211 08:42:37.439661 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0f9640a_de32_425a_b4bb_24618ad6b8b7.slice/crio-07a0d7cd2fe0c0ff2c70ba3cdb21cadfd31974d8b81cc5c93dd2c814f4415cc6 WatchSource:0}: Error finding container 07a0d7cd2fe0c0ff2c70ba3cdb21cadfd31974d8b81cc5c93dd2c814f4415cc6: Status 404 returned error can't find the container with id 07a0d7cd2fe0c0ff2c70ba3cdb21cadfd31974d8b81cc5c93dd2c814f4415cc6 Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.604493 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86697cc57f-6xflj" event={"ID":"d0f9640a-de32-425a-b4bb-24618ad6b8b7","Type":"ContainerStarted","Data":"07a0d7cd2fe0c0ff2c70ba3cdb21cadfd31974d8b81cc5c93dd2c814f4415cc6"} Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.626167 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" event={"ID":"d815e9b8-e583-4e4f-91cb-cc3a8f820eed","Type":"ContainerStarted","Data":"db67f1215fdbac4f4de6eb3dba8fe572243a6fdc7fe6c122968a6e75a795f341"} Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.626893 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.638874 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.639729 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-w5tjx" event={"ID":"f350b606-93b2-485b-85cd-a705acffd3e1","Type":"ContainerDied","Data":"7d909e0e9200ab8c438f528d7349123ca8d5f9435977883f3f4f0260333a70dd"} Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.639796 4992 scope.go:117] "RemoveContainer" containerID="f276728f31aa534aab5fb019c758e66dfe0e8b2d98e9b0fa99b6f4126314a0e7" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.669180 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podStartSLOduration=3.669161229 podStartE2EDuration="3.669161229s" podCreationTimestamp="2025-12-11 08:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:42:37.654840888 +0000 UTC m=+1181.914314814" watchObservedRunningTime="2025-12-11 08:42:37.669161229 +0000 UTC m=+1181.928635155" Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.759790 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-w5tjx"] Dec 11 08:42:37 crc kubenswrapper[4992]: I1211 08:42:37.776175 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-w5tjx"] Dec 11 08:42:38 crc kubenswrapper[4992]: I1211 08:42:38.123583 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f350b606-93b2-485b-85cd-a705acffd3e1" path="/var/lib/kubelet/pods/f350b606-93b2-485b-85cd-a705acffd3e1/volumes" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.823655 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f6c5dcd45-pxlgq"] Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.879840 4992 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-55cbdd6686-ddpfq"] Dec 11 08:42:42 crc kubenswrapper[4992]: E1211 08:42:42.880575 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f350b606-93b2-485b-85cd-a705acffd3e1" containerName="init" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.880600 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f350b606-93b2-485b-85cd-a705acffd3e1" containerName="init" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.880846 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f350b606-93b2-485b-85cd-a705acffd3e1" containerName="init" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.887349 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.889147 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.934539 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55cbdd6686-ddpfq"] Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.946643 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86697cc57f-6xflj"] Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.971780 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c6ddf9d4-c2dtv"] Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.973705 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:42 crc kubenswrapper[4992]: I1211 08:42:42.979465 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c6ddf9d4-c2dtv"] Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073549 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-tls-certs\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073594 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-scripts\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073681 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4499ae00-40e9-4f82-a285-b4962cbc3c61-logs\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073743 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rmlm\" (UniqueName: \"kubernetes.io/projected/4499ae00-40e9-4f82-a285-b4962cbc3c61-kube-api-access-4rmlm\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073783 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-combined-ca-bundle\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073799 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-config-data\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073889 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d82648a-9f40-4a60-8532-ec3617de1f45-scripts\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.073971 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d82648a-9f40-4a60-8532-ec3617de1f45-config-data\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.074050 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d82648a-9f40-4a60-8532-ec3617de1f45-logs\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.074074 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbc7j\" (UniqueName: 
\"kubernetes.io/projected/1d82648a-9f40-4a60-8532-ec3617de1f45-kube-api-access-rbc7j\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.074100 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-horizon-tls-certs\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.074115 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-horizon-secret-key\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.074135 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-combined-ca-bundle\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.074197 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-secret-key\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176100 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4499ae00-40e9-4f82-a285-b4962cbc3c61-logs\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176172 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rmlm\" (UniqueName: \"kubernetes.io/projected/4499ae00-40e9-4f82-a285-b4962cbc3c61-kube-api-access-4rmlm\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176211 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-combined-ca-bundle\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176226 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-config-data\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176245 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d82648a-9f40-4a60-8532-ec3617de1f45-scripts\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176277 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d82648a-9f40-4a60-8532-ec3617de1f45-config-data\") pod \"horizon-5c6ddf9d4-c2dtv\" 
(UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176315 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d82648a-9f40-4a60-8532-ec3617de1f45-logs\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176332 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbc7j\" (UniqueName: \"kubernetes.io/projected/1d82648a-9f40-4a60-8532-ec3617de1f45-kube-api-access-rbc7j\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176348 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-horizon-tls-certs\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176362 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-horizon-secret-key\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176382 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-combined-ca-bundle\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 
11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176408 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-secret-key\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176443 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-tls-certs\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.176462 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-scripts\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.177445 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4499ae00-40e9-4f82-a285-b4962cbc3c61-logs\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.178213 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-scripts\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.178660 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/1d82648a-9f40-4a60-8532-ec3617de1f45-logs\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.178896 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1d82648a-9f40-4a60-8532-ec3617de1f45-scripts\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.179754 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1d82648a-9f40-4a60-8532-ec3617de1f45-config-data\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.180661 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-config-data\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.183150 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-combined-ca-bundle\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.183483 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-secret-key\") pod \"horizon-55cbdd6686-ddpfq\" (UID: 
\"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.183559 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-tls-certs\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.184203 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-combined-ca-bundle\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.184784 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-horizon-tls-certs\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.185041 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1d82648a-9f40-4a60-8532-ec3617de1f45-horizon-secret-key\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.192303 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rmlm\" (UniqueName: \"kubernetes.io/projected/4499ae00-40e9-4f82-a285-b4962cbc3c61-kube-api-access-4rmlm\") pod \"horizon-55cbdd6686-ddpfq\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc 
kubenswrapper[4992]: I1211 08:42:43.193448 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbc7j\" (UniqueName: \"kubernetes.io/projected/1d82648a-9f40-4a60-8532-ec3617de1f45-kube-api-access-rbc7j\") pod \"horizon-5c6ddf9d4-c2dtv\" (UID: \"1d82648a-9f40-4a60-8532-ec3617de1f45\") " pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.209587 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:42:43 crc kubenswrapper[4992]: I1211 08:42:43.296077 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.210086 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c6ddf9d4-c2dtv"] Dec 11 08:42:44 crc kubenswrapper[4992]: W1211 08:42:44.222456 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d82648a_9f40_4a60_8532_ec3617de1f45.slice/crio-76e42fb984abfe1e36c8a491946605f5d8401c72c3d924d972ed3232f5e0ba10 WatchSource:0}: Error finding container 76e42fb984abfe1e36c8a491946605f5d8401c72c3d924d972ed3232f5e0ba10: Status 404 returned error can't find the container with id 76e42fb984abfe1e36c8a491946605f5d8401c72c3d924d972ed3232f5e0ba10 Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.339696 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55cbdd6686-ddpfq"] Dec 11 08:42:44 crc kubenswrapper[4992]: W1211 08:42:44.349762 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4499ae00_40e9_4f82_a285_b4962cbc3c61.slice/crio-fe2dd8dccf4ee282c6008cd9313473da34ed47bc0b622b48fd1db5dd5cf86440 WatchSource:0}: Error finding container 
fe2dd8dccf4ee282c6008cd9313473da34ed47bc0b622b48fd1db5dd5cf86440: Status 404 returned error can't find the container with id fe2dd8dccf4ee282c6008cd9313473da34ed47bc0b622b48fd1db5dd5cf86440 Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.708669 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cbdd6686-ddpfq" event={"ID":"4499ae00-40e9-4f82-a285-b4962cbc3c61","Type":"ContainerStarted","Data":"fe2dd8dccf4ee282c6008cd9313473da34ed47bc0b622b48fd1db5dd5cf86440"} Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.710246 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6ddf9d4-c2dtv" event={"ID":"1d82648a-9f40-4a60-8532-ec3617de1f45","Type":"ContainerStarted","Data":"76e42fb984abfe1e36c8a491946605f5d8401c72c3d924d972ed3232f5e0ba10"} Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.820385 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.893336 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d5c6v"] Dec 11 08:42:44 crc kubenswrapper[4992]: I1211 08:42:44.894900 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" containerID="cri-o://6004d367fdde689658b0eb9cc2eb4dec41e5a972c9aeb6c12d93325e6ff7a000" gracePeriod=10 Dec 11 08:42:47 crc kubenswrapper[4992]: I1211 08:42:47.016280 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 11 08:42:50 crc kubenswrapper[4992]: I1211 08:42:50.757778 4992 generic.go:334] "Generic (PLEG): container finished" 
podID="4c0a2193-f200-40ba-9038-68c99922f75a" containerID="6004d367fdde689658b0eb9cc2eb4dec41e5a972c9aeb6c12d93325e6ff7a000" exitCode=0 Dec 11 08:42:50 crc kubenswrapper[4992]: I1211 08:42:50.758345 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d5c6v" event={"ID":"4c0a2193-f200-40ba-9038-68c99922f75a","Type":"ContainerDied","Data":"6004d367fdde689658b0eb9cc2eb4dec41e5a972c9aeb6c12d93325e6ff7a000"} Dec 11 08:42:51 crc kubenswrapper[4992]: E1211 08:42:51.045188 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage2852222140/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 08:42:51 crc kubenswrapper[4992]: E1211 08:42:51.045504 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7dh78h5fch5h5d6h5dbh677hb6h69h648h58h74h564h5bbhdh5c6h694h6ch557h5f9h544h99hfdhd9h7dh59dhdbh99h8fhc8h5fbh65cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87k5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-86697cc57f-6xflj_openstack(d0f9640a-de32-425a-b4bb-24618ad6b8b7): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage2852222140/1\": happened during read: context 
canceled" logger="UnhandledError" Dec 11 08:42:51 crc kubenswrapper[4992]: E1211 08:42:51.047650 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage2852222140/1\\\": happened during read: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-86697cc57f-6xflj" podUID="d0f9640a-de32-425a-b4bb-24618ad6b8b7" Dec 11 08:42:52 crc kubenswrapper[4992]: I1211 08:42:52.016565 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 11 08:42:55 crc kubenswrapper[4992]: I1211 08:42:55.820302 4992 generic.go:334] "Generic (PLEG): container finished" podID="324347da-2503-407c-8a8b-49ded124f2a4" containerID="7cb784d206c5ffadbdbc364d7da11f757d984a6faece0f465dc5454e5623b3d7" exitCode=0 Dec 11 08:42:55 crc kubenswrapper[4992]: I1211 08:42:55.820382 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2rsp" event={"ID":"324347da-2503-407c-8a8b-49ded124f2a4","Type":"ContainerDied","Data":"7cb784d206c5ffadbdbc364d7da11f757d984a6faece0f465dc5454e5623b3d7"} Dec 11 08:42:57 crc kubenswrapper[4992]: I1211 08:42:57.016821 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 11 08:42:57 crc kubenswrapper[4992]: I1211 08:42:57.017511 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:43:02 crc kubenswrapper[4992]: I1211 08:43:02.016716 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 11 08:43:02 crc kubenswrapper[4992]: E1211 08:43:02.137680 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 08:43:02 crc kubenswrapper[4992]: E1211 08:43:02.137865 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd7h579hbh569h59h68fh65bh667h655h64fhch587hf4h76h65bh84h65fh5ddh5fh6dh7ch549h66ch5cbh6bh5f6h5d7h5cbh5bfh648hfch647q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brlsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil
,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5f6c5dcd45-pxlgq_openstack(98801adb-78c9-422e-a24b-1e8082db71f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:02 crc kubenswrapper[4992]: E1211 08:43:02.142524 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5f6c5dcd45-pxlgq" podUID="98801adb-78c9-422e-a24b-1e8082db71f7" Dec 11 08:43:02 crc kubenswrapper[4992]: I1211 08:43:02.894627 4992 generic.go:334] "Generic (PLEG): container finished" podID="19dbc853-9df9-491e-af8d-6c13547cd478" containerID="bec3e547db1d2714df96d3b7f8490f9777a225f5620875807246aab96a61b628" exitCode=0 Dec 11 08:43:02 crc kubenswrapper[4992]: I1211 08:43:02.894717 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx6rx" event={"ID":"19dbc853-9df9-491e-af8d-6c13547cd478","Type":"ContainerDied","Data":"bec3e547db1d2714df96d3b7f8490f9777a225f5620875807246aab96a61b628"} Dec 11 08:43:03 crc kubenswrapper[4992]: E1211 
08:43:03.674603 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 11 08:43:03 crc kubenswrapper[4992]: E1211 08:43:03.675121 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hkt9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabil
ities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-tcqkm_openstack(f8231c59-b8e7-4f7d-aeb0-888d579425ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:03 crc kubenswrapper[4992]: E1211 08:43:03.677764 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-tcqkm" podUID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.776550 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.900340 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-config-data\") pod \"98801adb-78c9-422e-a24b-1e8082db71f7\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.900490 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-scripts\") pod \"98801adb-78c9-422e-a24b-1e8082db71f7\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.900531 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98801adb-78c9-422e-a24b-1e8082db71f7-horizon-secret-key\") pod \"98801adb-78c9-422e-a24b-1e8082db71f7\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.900555 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98801adb-78c9-422e-a24b-1e8082db71f7-logs\") pod \"98801adb-78c9-422e-a24b-1e8082db71f7\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.900586 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brlsc\" (UniqueName: \"kubernetes.io/projected/98801adb-78c9-422e-a24b-1e8082db71f7-kube-api-access-brlsc\") pod \"98801adb-78c9-422e-a24b-1e8082db71f7\" (UID: \"98801adb-78c9-422e-a24b-1e8082db71f7\") " Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.901390 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/98801adb-78c9-422e-a24b-1e8082db71f7-logs" (OuterVolumeSpecName: "logs") pod "98801adb-78c9-422e-a24b-1e8082db71f7" (UID: "98801adb-78c9-422e-a24b-1e8082db71f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.901427 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-config-data" (OuterVolumeSpecName: "config-data") pod "98801adb-78c9-422e-a24b-1e8082db71f7" (UID: "98801adb-78c9-422e-a24b-1e8082db71f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.901414 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-scripts" (OuterVolumeSpecName: "scripts") pod "98801adb-78c9-422e-a24b-1e8082db71f7" (UID: "98801adb-78c9-422e-a24b-1e8082db71f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.906819 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f6c5dcd45-pxlgq" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.907242 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f6c5dcd45-pxlgq" event={"ID":"98801adb-78c9-422e-a24b-1e8082db71f7","Type":"ContainerDied","Data":"b9529e774eda2d4f341c70bcc0374812920e6e8d72fd11db450159047df905db"} Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.907395 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98801adb-78c9-422e-a24b-1e8082db71f7-kube-api-access-brlsc" (OuterVolumeSpecName: "kube-api-access-brlsc") pod "98801adb-78c9-422e-a24b-1e8082db71f7" (UID: "98801adb-78c9-422e-a24b-1e8082db71f7"). 
InnerVolumeSpecName "kube-api-access-brlsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:03 crc kubenswrapper[4992]: I1211 08:43:03.908563 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98801adb-78c9-422e-a24b-1e8082db71f7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "98801adb-78c9-422e-a24b-1e8082db71f7" (UID: "98801adb-78c9-422e-a24b-1e8082db71f7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:03 crc kubenswrapper[4992]: E1211 08:43:03.922683 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-tcqkm" podUID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.002587 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.002990 4992 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98801adb-78c9-422e-a24b-1e8082db71f7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.003005 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98801adb-78c9-422e-a24b-1e8082db71f7-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.003019 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brlsc\" (UniqueName: \"kubernetes.io/projected/98801adb-78c9-422e-a24b-1e8082db71f7-kube-api-access-brlsc\") on node \"crc\" DevicePath 
\"\"" Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.003046 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98801adb-78c9-422e-a24b-1e8082db71f7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:04 crc kubenswrapper[4992]: E1211 08:43:04.171848 4992 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98801adb_78c9_422e_a24b_1e8082db71f7.slice/crio-b9529e774eda2d4f341c70bcc0374812920e6e8d72fd11db450159047df905db\": RecentStats: unable to find data in memory cache]" Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.266380 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f6c5dcd45-pxlgq"] Dec 11 08:43:04 crc kubenswrapper[4992]: I1211 08:43:04.272441 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5f6c5dcd45-pxlgq"] Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.379254 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.379644 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.379699 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 
08:43:05.380390 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d64a7d32f88a68b108a9286da7fc154fed7c669f9f13fdf26c97611e89c34eb5"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.380445 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://d64a7d32f88a68b108a9286da7fc154fed7c669f9f13fdf26c97611e89c34eb5" gracePeriod=600 Dec 11 08:43:05 crc kubenswrapper[4992]: E1211 08:43:05.594799 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 08:43:05 crc kubenswrapper[4992]: E1211 08:43:05.594987 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6hbdh548h7h594h66ch58h695h5c8h5b9h5c7h68h99h66dh57fh9bh596hcch54h646hcdh5bdh669hf8h84hdchc7hf5hb9h545h6fh68dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hwb6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b5bd6fd5-lzb8x_openstack(06a1cf3a-2219-4683-8fab-10bee631255d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:05 crc kubenswrapper[4992]: E1211 08:43:05.602249 
4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b5bd6fd5-lzb8x" podUID="06a1cf3a-2219-4683-8fab-10bee631255d" Dec 11 08:43:05 crc kubenswrapper[4992]: E1211 08:43:05.887428 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 11 08:43:05 crc kubenswrapper[4992]: E1211 08:43:05.887963 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdh58bhch5d9h88h668h5d5h57dh5cfh59bh5c9h575h5d4h654h7hbfh8fh5f5h686h684h5cfh8h58fh576h559h59dh5bh5f4h57fhc4hffh77q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qv57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(65182c76-fea3-4f83-b03f-bfce48989e82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.948554 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86697cc57f-6xflj" event={"ID":"d0f9640a-de32-425a-b4bb-24618ad6b8b7","Type":"ContainerDied","Data":"07a0d7cd2fe0c0ff2c70ba3cdb21cadfd31974d8b81cc5c93dd2c814f4415cc6"} Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.948597 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a0d7cd2fe0c0ff2c70ba3cdb21cadfd31974d8b81cc5c93dd2c814f4415cc6" Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.952242 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-g2rsp" event={"ID":"324347da-2503-407c-8a8b-49ded124f2a4","Type":"ContainerDied","Data":"6772c1140222b463a9df57ae9ee652abdf935c08d1388949ea2eb5ee0e76204a"} Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.952292 4992 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6772c1140222b463a9df57ae9ee652abdf935c08d1388949ea2eb5ee0e76204a" Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.955593 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="d64a7d32f88a68b108a9286da7fc154fed7c669f9f13fdf26c97611e89c34eb5" exitCode=0 Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.955754 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"d64a7d32f88a68b108a9286da7fc154fed7c669f9f13fdf26c97611e89c34eb5"} Dec 11 08:43:05 crc kubenswrapper[4992]: I1211 08:43:05.955789 4992 scope.go:117] "RemoveContainer" containerID="052d1b39952568f7c7dadc00d816b97c8f69c2e12d851ed0f8503ebf05896a23" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.005993 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.023743 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.127840 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98801adb-78c9-422e-a24b-1e8082db71f7" path="/var/lib/kubelet/pods/98801adb-78c9-422e-a24b-1e8082db71f7/volumes" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141275 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-credential-keys\") pod \"324347da-2503-407c-8a8b-49ded124f2a4\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141336 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-config-data\") pod \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141375 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-combined-ca-bundle\") pod \"324347da-2503-407c-8a8b-49ded124f2a4\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141479 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f9640a-de32-425a-b4bb-24618ad6b8b7-logs\") pod \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141548 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-config-data\") pod 
\"324347da-2503-407c-8a8b-49ded124f2a4\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141593 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0f9640a-de32-425a-b4bb-24618ad6b8b7-horizon-secret-key\") pod \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141621 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6npfz\" (UniqueName: \"kubernetes.io/projected/324347da-2503-407c-8a8b-49ded124f2a4-kube-api-access-6npfz\") pod \"324347da-2503-407c-8a8b-49ded124f2a4\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141695 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-scripts\") pod \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141732 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-fernet-keys\") pod \"324347da-2503-407c-8a8b-49ded124f2a4\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141771 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-scripts\") pod \"324347da-2503-407c-8a8b-49ded124f2a4\" (UID: \"324347da-2503-407c-8a8b-49ded124f2a4\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.141830 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-87k5h\" (UniqueName: \"kubernetes.io/projected/d0f9640a-de32-425a-b4bb-24618ad6b8b7-kube-api-access-87k5h\") pod \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\" (UID: \"d0f9640a-de32-425a-b4bb-24618ad6b8b7\") " Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.142027 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-config-data" (OuterVolumeSpecName: "config-data") pod "d0f9640a-de32-425a-b4bb-24618ad6b8b7" (UID: "d0f9640a-de32-425a-b4bb-24618ad6b8b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.142476 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.145260 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f9640a-de32-425a-b4bb-24618ad6b8b7-logs" (OuterVolumeSpecName: "logs") pod "d0f9640a-de32-425a-b4bb-24618ad6b8b7" (UID: "d0f9640a-de32-425a-b4bb-24618ad6b8b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.146529 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-scripts" (OuterVolumeSpecName: "scripts") pod "d0f9640a-de32-425a-b4bb-24618ad6b8b7" (UID: "d0f9640a-de32-425a-b4bb-24618ad6b8b7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.150490 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/324347da-2503-407c-8a8b-49ded124f2a4-kube-api-access-6npfz" (OuterVolumeSpecName: "kube-api-access-6npfz") pod "324347da-2503-407c-8a8b-49ded124f2a4" (UID: "324347da-2503-407c-8a8b-49ded124f2a4"). InnerVolumeSpecName "kube-api-access-6npfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.153807 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "324347da-2503-407c-8a8b-49ded124f2a4" (UID: "324347da-2503-407c-8a8b-49ded124f2a4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.157108 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-scripts" (OuterVolumeSpecName: "scripts") pod "324347da-2503-407c-8a8b-49ded124f2a4" (UID: "324347da-2503-407c-8a8b-49ded124f2a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.157127 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f9640a-de32-425a-b4bb-24618ad6b8b7-kube-api-access-87k5h" (OuterVolumeSpecName: "kube-api-access-87k5h") pod "d0f9640a-de32-425a-b4bb-24618ad6b8b7" (UID: "d0f9640a-de32-425a-b4bb-24618ad6b8b7"). InnerVolumeSpecName "kube-api-access-87k5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.163249 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "324347da-2503-407c-8a8b-49ded124f2a4" (UID: "324347da-2503-407c-8a8b-49ded124f2a4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.163367 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f9640a-de32-425a-b4bb-24618ad6b8b7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d0f9640a-de32-425a-b4bb-24618ad6b8b7" (UID: "d0f9640a-de32-425a-b4bb-24618ad6b8b7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.177186 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-config-data" (OuterVolumeSpecName: "config-data") pod "324347da-2503-407c-8a8b-49ded124f2a4" (UID: "324347da-2503-407c-8a8b-49ded124f2a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.178421 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "324347da-2503-407c-8a8b-49ded124f2a4" (UID: "324347da-2503-407c-8a8b-49ded124f2a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244655 4992 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244703 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244717 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0f9640a-de32-425a-b4bb-24618ad6b8b7-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244729 4992 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d0f9640a-de32-425a-b4bb-24618ad6b8b7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244741 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244752 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6npfz\" (UniqueName: \"kubernetes.io/projected/324347da-2503-407c-8a8b-49ded124f2a4-kube-api-access-6npfz\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244765 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d0f9640a-de32-425a-b4bb-24618ad6b8b7-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244776 4992 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244786 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/324347da-2503-407c-8a8b-49ded124f2a4-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.244797 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87k5h\" (UniqueName: \"kubernetes.io/projected/d0f9640a-de32-425a-b4bb-24618ad6b8b7-kube-api-access-87k5h\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.962266 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-g2rsp" Dec 11 08:43:06 crc kubenswrapper[4992]: I1211 08:43:06.962310 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86697cc57f-6xflj" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.093213 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.124169 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86697cc57f-6xflj"] Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.133321 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86697cc57f-6xflj"] Dec 11 08:43:07 crc kubenswrapper[4992]: E1211 08:43:07.143424 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 11 08:43:07 crc kubenswrapper[4992]: E1211 08:43:07.143593 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpt6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-8mmcr_openstack(73c99101-825a-4a3b-acf0-7fc522f3631f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:07 crc kubenswrapper[4992]: E1211 08:43:07.144797 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-8mmcr" podUID="73c99101-825a-4a3b-acf0-7fc522f3631f" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.206391 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-g2rsp"] Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.213854 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-g2rsp"] Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.261418 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-dns-svc\") pod \"4c0a2193-f200-40ba-9038-68c99922f75a\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.261522 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-nb\") pod \"4c0a2193-f200-40ba-9038-68c99922f75a\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.261571 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ssz\" (UniqueName: \"kubernetes.io/projected/4c0a2193-f200-40ba-9038-68c99922f75a-kube-api-access-g9ssz\") pod \"4c0a2193-f200-40ba-9038-68c99922f75a\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.261600 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-config\") pod \"4c0a2193-f200-40ba-9038-68c99922f75a\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.261701 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-sb\") pod \"4c0a2193-f200-40ba-9038-68c99922f75a\" (UID: \"4c0a2193-f200-40ba-9038-68c99922f75a\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.275197 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4c0a2193-f200-40ba-9038-68c99922f75a-kube-api-access-g9ssz" (OuterVolumeSpecName: "kube-api-access-g9ssz") pod "4c0a2193-f200-40ba-9038-68c99922f75a" (UID: "4c0a2193-f200-40ba-9038-68c99922f75a"). InnerVolumeSpecName "kube-api-access-g9ssz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.318742 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4c0a2193-f200-40ba-9038-68c99922f75a" (UID: "4c0a2193-f200-40ba-9038-68c99922f75a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.328054 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4c0a2193-f200-40ba-9038-68c99922f75a" (UID: "4c0a2193-f200-40ba-9038-68c99922f75a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.328833 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4c0a2193-f200-40ba-9038-68c99922f75a" (UID: "4c0a2193-f200-40ba-9038-68c99922f75a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.334795 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-smb2h"] Dec 11 08:43:07 crc kubenswrapper[4992]: E1211 08:43:07.335207 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.335222 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" Dec 11 08:43:07 crc kubenswrapper[4992]: E1211 08:43:07.335247 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="324347da-2503-407c-8a8b-49ded124f2a4" containerName="keystone-bootstrap" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.335254 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="324347da-2503-407c-8a8b-49ded124f2a4" containerName="keystone-bootstrap" Dec 11 08:43:07 crc kubenswrapper[4992]: E1211 08:43:07.335267 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="init" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.335274 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="init" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.335434 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="324347da-2503-407c-8a8b-49ded124f2a4" containerName="keystone-bootstrap" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.335449 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.336092 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.337990 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xs4g8" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.338250 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-config" (OuterVolumeSpecName: "config") pod "4c0a2193-f200-40ba-9038-68c99922f75a" (UID: "4c0a2193-f200-40ba-9038-68c99922f75a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.338305 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.338440 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.338687 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.338862 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.351449 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-smb2h"] Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.363671 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.363712 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9ssz\" (UniqueName: \"kubernetes.io/projected/4c0a2193-f200-40ba-9038-68c99922f75a-kube-api-access-g9ssz\") on node \"crc\" DevicePath 
\"\"" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.363731 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.363742 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.363755 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4c0a2193-f200-40ba-9038-68c99922f75a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.465886 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-credential-keys\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.465989 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-combined-ca-bundle\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.466111 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtvd\" (UniqueName: \"kubernetes.io/projected/6952512b-7da5-4bc5-b91f-bbeb61056854-kube-api-access-smtvd\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" 
Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.466184 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-scripts\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.466243 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-config-data\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.466375 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-fernet-keys\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.568951 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-credential-keys\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.569027 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-combined-ca-bundle\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.569071 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtvd\" (UniqueName: \"kubernetes.io/projected/6952512b-7da5-4bc5-b91f-bbeb61056854-kube-api-access-smtvd\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.569097 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-scripts\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.569131 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-config-data\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.569186 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-fernet-keys\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.573865 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-combined-ca-bundle\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.573992 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-credential-keys\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.574116 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-fernet-keys\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.574439 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-scripts\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.574439 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-config-data\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.586660 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtvd\" (UniqueName: \"kubernetes.io/projected/6952512b-7da5-4bc5-b91f-bbeb61056854-kube-api-access-smtvd\") pod \"keystone-bootstrap-smb2h\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.709405 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.826164 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fx6rx" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.832527 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.972387 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fx6rx" event={"ID":"19dbc853-9df9-491e-af8d-6c13547cd478","Type":"ContainerDied","Data":"9f150d41b8b5d5b6cadb3c0c5cf3a09bf297a14a72cd8bfd7647da52a6c580af"} Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.972444 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f150d41b8b5d5b6cadb3c0c5cf3a09bf297a14a72cd8bfd7647da52a6c580af" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.972553 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fx6rx" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.974370 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-d5c6v" event={"ID":"4c0a2193-f200-40ba-9038-68c99922f75a","Type":"ContainerDied","Data":"7609d77e074ccef179d1a077b49f1a15242f838c6bc9ad1c337cc2219cceb1c7"} Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.974533 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-d5c6v" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975627 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-config-data\") pod \"19dbc853-9df9-491e-af8d-6c13547cd478\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975719 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-scripts\") pod \"06a1cf3a-2219-4683-8fab-10bee631255d\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975771 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a1cf3a-2219-4683-8fab-10bee631255d-logs\") pod \"06a1cf3a-2219-4683-8fab-10bee631255d\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975794 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwb6x\" (UniqueName: \"kubernetes.io/projected/06a1cf3a-2219-4683-8fab-10bee631255d-kube-api-access-hwb6x\") pod \"06a1cf3a-2219-4683-8fab-10bee631255d\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975853 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-db-sync-config-data\") pod \"19dbc853-9df9-491e-af8d-6c13547cd478\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975894 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-combined-ca-bundle\") pod \"19dbc853-9df9-491e-af8d-6c13547cd478\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975922 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-config-data\") pod \"06a1cf3a-2219-4683-8fab-10bee631255d\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.975950 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2d95\" (UniqueName: \"kubernetes.io/projected/19dbc853-9df9-491e-af8d-6c13547cd478-kube-api-access-d2d95\") pod \"19dbc853-9df9-491e-af8d-6c13547cd478\" (UID: \"19dbc853-9df9-491e-af8d-6c13547cd478\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.976032 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06a1cf3a-2219-4683-8fab-10bee631255d-horizon-secret-key\") pod \"06a1cf3a-2219-4683-8fab-10bee631255d\" (UID: \"06a1cf3a-2219-4683-8fab-10bee631255d\") " Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.979425 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06a1cf3a-2219-4683-8fab-10bee631255d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "06a1cf3a-2219-4683-8fab-10bee631255d" (UID: "06a1cf3a-2219-4683-8fab-10bee631255d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.982485 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b5bd6fd5-lzb8x" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.982495 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5bd6fd5-lzb8x" event={"ID":"06a1cf3a-2219-4683-8fab-10bee631255d","Type":"ContainerDied","Data":"5e172187a61f7cf7f1faa13b0ee895b4757f430cdfc34fb979439bc7e31cf82f"} Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.982801 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06a1cf3a-2219-4683-8fab-10bee631255d-logs" (OuterVolumeSpecName: "logs") pod "06a1cf3a-2219-4683-8fab-10bee631255d" (UID: "06a1cf3a-2219-4683-8fab-10bee631255d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.982901 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-scripts" (OuterVolumeSpecName: "scripts") pod "06a1cf3a-2219-4683-8fab-10bee631255d" (UID: "06a1cf3a-2219-4683-8fab-10bee631255d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.983786 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-config-data" (OuterVolumeSpecName: "config-data") pod "06a1cf3a-2219-4683-8fab-10bee631255d" (UID: "06a1cf3a-2219-4683-8fab-10bee631255d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.986789 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "19dbc853-9df9-491e-af8d-6c13547cd478" (UID: "19dbc853-9df9-491e-af8d-6c13547cd478"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.990483 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a1cf3a-2219-4683-8fab-10bee631255d-kube-api-access-hwb6x" (OuterVolumeSpecName: "kube-api-access-hwb6x") pod "06a1cf3a-2219-4683-8fab-10bee631255d" (UID: "06a1cf3a-2219-4683-8fab-10bee631255d"). InnerVolumeSpecName "kube-api-access-hwb6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:07 crc kubenswrapper[4992]: I1211 08:43:07.991098 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dbc853-9df9-491e-af8d-6c13547cd478-kube-api-access-d2d95" (OuterVolumeSpecName: "kube-api-access-d2d95") pod "19dbc853-9df9-491e-af8d-6c13547cd478" (UID: "19dbc853-9df9-491e-af8d-6c13547cd478"). InnerVolumeSpecName "kube-api-access-d2d95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.008734 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19dbc853-9df9-491e-af8d-6c13547cd478" (UID: "19dbc853-9df9-491e-af8d-6c13547cd478"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.029917 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-config-data" (OuterVolumeSpecName: "config-data") pod "19dbc853-9df9-491e-af8d-6c13547cd478" (UID: "19dbc853-9df9-491e-af8d-6c13547cd478"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.078783 4992 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/06a1cf3a-2219-4683-8fab-10bee631255d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082079 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082114 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082127 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06a1cf3a-2219-4683-8fab-10bee631255d-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082142 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwb6x\" (UniqueName: \"kubernetes.io/projected/06a1cf3a-2219-4683-8fab-10bee631255d-kube-api-access-hwb6x\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082156 4992 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082168 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dbc853-9df9-491e-af8d-6c13547cd478-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082179 4992 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/06a1cf3a-2219-4683-8fab-10bee631255d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.082190 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2d95\" (UniqueName: \"kubernetes.io/projected/19dbc853-9df9-491e-af8d-6c13547cd478-kube-api-access-d2d95\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.111298 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="324347da-2503-407c-8a8b-49ded124f2a4" path="/var/lib/kubelet/pods/324347da-2503-407c-8a8b-49ded124f2a4/volumes" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.112008 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f9640a-de32-425a-b4bb-24618ad6b8b7" path="/var/lib/kubelet/pods/d0f9640a-de32-425a-b4bb-24618ad6b8b7/volumes" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.115164 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d5c6v"] Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.124772 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-d5c6v"] Dec 11 08:43:08 crc kubenswrapper[4992]: E1211 08:43:08.241652 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-8mmcr" podUID="73c99101-825a-4a3b-acf0-7fc522f3631f" Dec 11 08:43:08 crc kubenswrapper[4992]: E1211 08:43:08.250041 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 11 08:43:08 crc 
kubenswrapper[4992]: E1211 08:43:08.250528 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4c8bl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-v95pn_openstack(afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:08 crc kubenswrapper[4992]: E1211 
08:43:08.251776 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-v95pn" podUID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.306363 4992 scope.go:117] "RemoveContainer" containerID="6004d367fdde689658b0eb9cc2eb4dec41e5a972c9aeb6c12d93325e6ff7a000" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.341512 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5bd6fd5-lzb8x"] Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.348819 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b5bd6fd5-lzb8x"] Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.913124 4992 scope.go:117] "RemoveContainer" containerID="56fe2d93d804795df924f20b91f8a282889092bd038aa4c88ca421ebac3e98d3" Dec 11 08:43:08 crc kubenswrapper[4992]: I1211 08:43:08.957326 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-smb2h"] Dec 11 08:43:08 crc kubenswrapper[4992]: W1211 08:43:08.978993 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6952512b_7da5_4bc5_b91f_bbeb61056854.slice/crio-60fa9a9a4109f9ce10208231f2db1a0b64dd38e67699a38ad590f26fc9dba543 WatchSource:0}: Error finding container 60fa9a9a4109f9ce10208231f2db1a0b64dd38e67699a38ad590f26fc9dba543: Status 404 returned error can't find the container with id 60fa9a9a4109f9ce10208231f2db1a0b64dd38e67699a38ad590f26fc9dba543 Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.000158 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smb2h" event={"ID":"6952512b-7da5-4bc5-b91f-bbeb61056854","Type":"ContainerStarted","Data":"60fa9a9a4109f9ce10208231f2db1a0b64dd38e67699a38ad590f26fc9dba543"} Dec 
11 08:43:09 crc kubenswrapper[4992]: E1211 08:43:09.006316 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-v95pn" podUID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.267490 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s8vfc"] Dec 11 08:43:09 crc kubenswrapper[4992]: E1211 08:43:09.268074 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dbc853-9df9-491e-af8d-6c13547cd478" containerName="glance-db-sync" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.268093 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dbc853-9df9-491e-af8d-6c13547cd478" containerName="glance-db-sync" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.268273 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="19dbc853-9df9-491e-af8d-6c13547cd478" containerName="glance-db-sync" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.269195 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.291591 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s8vfc"] Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.408738 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-config\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.408862 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.408898 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.408956 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7x45\" (UniqueName: \"kubernetes.io/projected/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-kube-api-access-t7x45\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.409125 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.409328 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.511565 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.511667 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-config\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.511696 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.511731 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.511778 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7x45\" (UniqueName: \"kubernetes.io/projected/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-kube-api-access-t7x45\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.511831 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.513762 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.513832 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.514610 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-config\") pod 
\"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.514655 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.514871 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.536773 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7x45\" (UniqueName: \"kubernetes.io/projected/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-kube-api-access-t7x45\") pod \"dnsmasq-dns-8b5c85b87-s8vfc\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:09 crc kubenswrapper[4992]: I1211 08:43:09.616059 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.011956 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"c9b7b9751f69dacb432c6111e285d3e2e47bd2a6e7fe288f0f982e3e58b7bafb"} Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.014441 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cbdd6686-ddpfq" event={"ID":"4499ae00-40e9-4f82-a285-b4962cbc3c61","Type":"ContainerStarted","Data":"271c57eec089769bbb4ceebf12e5b27ed20fe1df01b7ed3ff394d432f49782ec"} Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.014495 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cbdd6686-ddpfq" event={"ID":"4499ae00-40e9-4f82-a285-b4962cbc3c61","Type":"ContainerStarted","Data":"d65b0fba99de8e51ce552bd8878515eec8d287c8f746937db41d8b1365b67a0e"} Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.018080 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6ddf9d4-c2dtv" event={"ID":"1d82648a-9f40-4a60-8532-ec3617de1f45","Type":"ContainerStarted","Data":"22d3606164ec80991f5a96e716b6aaae57f8bec196189075b80432ee61828ae3"} Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.018128 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c6ddf9d4-c2dtv" event={"ID":"1d82648a-9f40-4a60-8532-ec3617de1f45","Type":"ContainerStarted","Data":"99a58be98564489d7328fec0e8cf1f1352612d99c8d6121644781947ea72f687"} Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.021124 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smb2h" event={"ID":"6952512b-7da5-4bc5-b91f-bbeb61056854","Type":"ContainerStarted","Data":"4f96a2f0ff0421ec2021b5ff15e71261702f804f2987ff43bb3f347dabb4dc10"} Dec 11 08:43:10 crc 
kubenswrapper[4992]: I1211 08:43:10.023966 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65182c76-fea3-4f83-b03f-bfce48989e82","Type":"ContainerStarted","Data":"b2d3e4d2e363ce847d275f8c9d4147ea923added3a98b3531764f735c952f8a3"} Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.108460 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a1cf3a-2219-4683-8fab-10bee631255d" path="/var/lib/kubelet/pods/06a1cf3a-2219-4683-8fab-10bee631255d/volumes" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.112829 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" path="/var/lib/kubelet/pods/4c0a2193-f200-40ba-9038-68c99922f75a/volumes" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.126652 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55cbdd6686-ddpfq" podStartSLOduration=4.18477488 podStartE2EDuration="28.126610642s" podCreationTimestamp="2025-12-11 08:42:42 +0000 UTC" firstStartedPulling="2025-12-11 08:42:44.352374865 +0000 UTC m=+1188.611848791" lastFinishedPulling="2025-12-11 08:43:08.294210627 +0000 UTC m=+1212.553684553" observedRunningTime="2025-12-11 08:43:10.125350051 +0000 UTC m=+1214.384823997" watchObservedRunningTime="2025-12-11 08:43:10.126610642 +0000 UTC m=+1214.386084568" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.170950 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-smb2h" podStartSLOduration=3.170934108 podStartE2EDuration="3.170934108s" podCreationTimestamp="2025-12-11 08:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:10.168164551 +0000 UTC m=+1214.427638487" watchObservedRunningTime="2025-12-11 08:43:10.170934108 +0000 UTC m=+1214.430408034" Dec 11 08:43:10 crc 
kubenswrapper[4992]: I1211 08:43:10.216195 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c6ddf9d4-c2dtv" podStartSLOduration=3.896498324 podStartE2EDuration="28.216179447s" podCreationTimestamp="2025-12-11 08:42:42 +0000 UTC" firstStartedPulling="2025-12-11 08:42:44.224710306 +0000 UTC m=+1188.484184232" lastFinishedPulling="2025-12-11 08:43:08.544391419 +0000 UTC m=+1212.803865355" observedRunningTime="2025-12-11 08:43:10.206855769 +0000 UTC m=+1214.466329695" watchObservedRunningTime="2025-12-11 08:43:10.216179447 +0000 UTC m=+1214.475653373" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.225607 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s8vfc"] Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.315077 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.320649 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.327484 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.328528 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-pt75q" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.328795 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.330676 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.387240 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.390527 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.392681 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.402742 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.431817 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.431869 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffcm8\" (UniqueName: \"kubernetes.io/projected/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-kube-api-access-ffcm8\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.431916 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.431996 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-logs\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " 
pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.432030 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.432053 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.432191 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.534600 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzwf\" (UniqueName: \"kubernetes.io/projected/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-kube-api-access-gbzwf\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.534975 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " 
pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535054 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-logs\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535076 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535098 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535127 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535151 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc 
kubenswrapper[4992]: I1211 08:43:10.535192 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535230 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535264 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535297 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535340 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: 
I1211 08:43:10.535414 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.535443 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffcm8\" (UniqueName: \"kubernetes.io/projected/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-kube-api-access-ffcm8\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.536445 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.536451 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.536488 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-logs\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.542780 4992 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.544212 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.551882 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.556841 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffcm8\" (UniqueName: \"kubernetes.io/projected/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-kube-api-access-ffcm8\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.564591 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.636680 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.636733 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.636785 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.636832 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.636855 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.636931 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.637014 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbzwf\" (UniqueName: \"kubernetes.io/projected/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-kube-api-access-gbzwf\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.637985 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.638309 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-logs\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.638778 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.645935 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.646415 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.648569 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.659254 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbzwf\" (UniqueName: \"kubernetes.io/projected/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-kube-api-access-gbzwf\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.669094 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.716707 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:43:10 crc kubenswrapper[4992]: I1211 08:43:10.728013 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:11 crc kubenswrapper[4992]: I1211 08:43:11.037025 4992 generic.go:334] "Generic (PLEG): container finished" podID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerID="ff36933e55db3abb3b1c6102b104f6fe2fc53a68c103e8d2cb7a1ded0ca96883" exitCode=0 Dec 11 08:43:11 crc kubenswrapper[4992]: I1211 08:43:11.037377 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" event={"ID":"ff4bb71d-a1e0-4bb1-8510-ee381c395f87","Type":"ContainerDied","Data":"ff36933e55db3abb3b1c6102b104f6fe2fc53a68c103e8d2cb7a1ded0ca96883"} Dec 11 08:43:11 crc kubenswrapper[4992]: I1211 08:43:11.037427 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" event={"ID":"ff4bb71d-a1e0-4bb1-8510-ee381c395f87","Type":"ContainerStarted","Data":"09372a5e2ca9279d4e7ae9bde0e6a458f1a1f5dcfd76e4de54b0a30492695e98"} Dec 11 08:43:11 crc kubenswrapper[4992]: I1211 08:43:11.469321 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:11 crc kubenswrapper[4992]: W1211 08:43:11.469836 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eee4d38_9a13_4ee8_a7c6_993cf94560ba.slice/crio-c29f9027ad964236b9c368fa01886d9be80e0a5d62a0c475e2db61f5eaf47803 WatchSource:0}: Error finding container c29f9027ad964236b9c368fa01886d9be80e0a5d62a0c475e2db61f5eaf47803: Status 404 returned error can't find the container with id c29f9027ad964236b9c368fa01886d9be80e0a5d62a0c475e2db61f5eaf47803 Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.017908 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-d5c6v" podUID="4c0a2193-f200-40ba-9038-68c99922f75a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Dec 11 08:43:12 crc 
kubenswrapper[4992]: I1211 08:43:12.050740 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eee4d38-9a13-4ee8-a7c6-993cf94560ba","Type":"ContainerStarted","Data":"c29f9027ad964236b9c368fa01886d9be80e0a5d62a0c475e2db61f5eaf47803"} Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.053954 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" event={"ID":"ff4bb71d-a1e0-4bb1-8510-ee381c395f87","Type":"ContainerStarted","Data":"7e674cfad584638aebe91ee68571d1145ef19ae4f709c29b2ce1505d2bccc553"} Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.055129 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.083830 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" podStartSLOduration=3.083808826 podStartE2EDuration="3.083808826s" podCreationTimestamp="2025-12-11 08:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:12.074713804 +0000 UTC m=+1216.334187730" watchObservedRunningTime="2025-12-11 08:43:12.083808826 +0000 UTC m=+1216.343282752" Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.358814 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.443089 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:12 crc kubenswrapper[4992]: I1211 08:43:12.493416 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:12 crc kubenswrapper[4992]: W1211 08:43:12.501507 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b42a1c9_1c6d_454a_adc4_ed2135c67de9.slice/crio-0cf5795a685b502fc70289f5132c6866f58cc61a5472fc63bbf42b281e2010e5 WatchSource:0}: Error finding container 0cf5795a685b502fc70289f5132c6866f58cc61a5472fc63bbf42b281e2010e5: Status 404 returned error can't find the container with id 0cf5795a685b502fc70289f5132c6866f58cc61a5472fc63bbf42b281e2010e5 Dec 11 08:43:13 crc kubenswrapper[4992]: I1211 08:43:13.071750 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42a1c9-1c6d-454a-adc4-ed2135c67de9","Type":"ContainerStarted","Data":"0cf5795a685b502fc70289f5132c6866f58cc61a5472fc63bbf42b281e2010e5"} Dec 11 08:43:13 crc kubenswrapper[4992]: I1211 08:43:13.210336 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:43:13 crc kubenswrapper[4992]: I1211 08:43:13.210387 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:43:13 crc kubenswrapper[4992]: I1211 08:43:13.297279 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:43:13 crc kubenswrapper[4992]: I1211 08:43:13.297341 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:43:14 crc kubenswrapper[4992]: I1211 08:43:14.081453 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42a1c9-1c6d-454a-adc4-ed2135c67de9","Type":"ContainerStarted","Data":"b26c80d3ddeceecfec83cb1a0c74d740273c8d3ae0ff4db24073d85a4069b608"} Dec 11 08:43:14 crc kubenswrapper[4992]: I1211 08:43:14.083503 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"6eee4d38-9a13-4ee8-a7c6-993cf94560ba","Type":"ContainerStarted","Data":"72d82bd399b9d9b20926e3c79679dff20f9bb067b5fbb1dbfab1e9d566529c63"} Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.096591 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42a1c9-1c6d-454a-adc4-ed2135c67de9","Type":"ContainerStarted","Data":"a03de1e16e2f059861f3e391dfd9a94c45ea1592078be02423a0e811825892c2"} Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.096723 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-log" containerID="cri-o://b26c80d3ddeceecfec83cb1a0c74d740273c8d3ae0ff4db24073d85a4069b608" gracePeriod=30 Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.096937 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-httpd" containerID="cri-o://a03de1e16e2f059861f3e391dfd9a94c45ea1592078be02423a0e811825892c2" gracePeriod=30 Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.112748 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eee4d38-9a13-4ee8-a7c6-993cf94560ba","Type":"ContainerStarted","Data":"1081f8c2df1a4f71daa72be7ee33a511a9630dea62d86007798afd205267c806"} Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.113106 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-log" containerID="cri-o://72d82bd399b9d9b20926e3c79679dff20f9bb067b5fbb1dbfab1e9d566529c63" gracePeriod=30 Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.113217 4992 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-httpd" containerID="cri-o://1081f8c2df1a4f71daa72be7ee33a511a9630dea62d86007798afd205267c806" gracePeriod=30 Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.136229 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.136202725 podStartE2EDuration="6.136202725s" podCreationTimestamp="2025-12-11 08:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:15.1253828 +0000 UTC m=+1219.384856736" watchObservedRunningTime="2025-12-11 08:43:15.136202725 +0000 UTC m=+1219.395676651" Dec 11 08:43:15 crc kubenswrapper[4992]: I1211 08:43:15.155811 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.155787755 podStartE2EDuration="6.155787755s" podCreationTimestamp="2025-12-11 08:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:15.15272333 +0000 UTC m=+1219.412197276" watchObservedRunningTime="2025-12-11 08:43:15.155787755 +0000 UTC m=+1219.415261691" Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.168794 4992 generic.go:334] "Generic (PLEG): container finished" podID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerID="a03de1e16e2f059861f3e391dfd9a94c45ea1592078be02423a0e811825892c2" exitCode=0 Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.169388 4992 generic.go:334] "Generic (PLEG): container finished" podID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerID="b26c80d3ddeceecfec83cb1a0c74d740273c8d3ae0ff4db24073d85a4069b608" exitCode=143 Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.168874 4992 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42a1c9-1c6d-454a-adc4-ed2135c67de9","Type":"ContainerDied","Data":"a03de1e16e2f059861f3e391dfd9a94c45ea1592078be02423a0e811825892c2"} Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.169432 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42a1c9-1c6d-454a-adc4-ed2135c67de9","Type":"ContainerDied","Data":"b26c80d3ddeceecfec83cb1a0c74d740273c8d3ae0ff4db24073d85a4069b608"} Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.174140 4992 generic.go:334] "Generic (PLEG): container finished" podID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerID="1081f8c2df1a4f71daa72be7ee33a511a9630dea62d86007798afd205267c806" exitCode=0 Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.174164 4992 generic.go:334] "Generic (PLEG): container finished" podID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerID="72d82bd399b9d9b20926e3c79679dff20f9bb067b5fbb1dbfab1e9d566529c63" exitCode=143 Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.174179 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eee4d38-9a13-4ee8-a7c6-993cf94560ba","Type":"ContainerDied","Data":"1081f8c2df1a4f71daa72be7ee33a511a9630dea62d86007798afd205267c806"} Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.174197 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eee4d38-9a13-4ee8-a7c6-993cf94560ba","Type":"ContainerDied","Data":"72d82bd399b9d9b20926e3c79679dff20f9bb067b5fbb1dbfab1e9d566529c63"} Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.618022 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.676027 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-rbwgf"] 
Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.676306 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" containerID="cri-o://db67f1215fdbac4f4de6eb3dba8fe572243a6fdc7fe6c122968a6e75a795f341" gracePeriod=10 Dec 11 08:43:19 crc kubenswrapper[4992]: I1211 08:43:19.818968 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 11 08:43:23 crc kubenswrapper[4992]: I1211 08:43:23.213935 4992 generic.go:334] "Generic (PLEG): container finished" podID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerID="db67f1215fdbac4f4de6eb3dba8fe572243a6fdc7fe6c122968a6e75a795f341" exitCode=0 Dec 11 08:43:23 crc kubenswrapper[4992]: I1211 08:43:23.214031 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" event={"ID":"d815e9b8-e583-4e4f-91cb-cc3a8f820eed","Type":"ContainerDied","Data":"db67f1215fdbac4f4de6eb3dba8fe572243a6fdc7fe6c122968a6e75a795f341"} Dec 11 08:43:23 crc kubenswrapper[4992]: I1211 08:43:23.214933 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 11 08:43:23 crc kubenswrapper[4992]: I1211 08:43:23.299974 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c6ddf9d4-c2dtv" podUID="1d82648a-9f40-4a60-8532-ec3617de1f45" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.143:8443: connect: connection refused" Dec 11 08:43:24 crc kubenswrapper[4992]: I1211 08:43:24.818756 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 11 08:43:29 crc kubenswrapper[4992]: I1211 08:43:29.819059 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 11 08:43:29 crc kubenswrapper[4992]: I1211 08:43:29.820268 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" Dec 11 08:43:33 crc kubenswrapper[4992]: I1211 08:43:33.211812 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 11 08:43:33 crc kubenswrapper[4992]: I1211 08:43:33.297028 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5c6ddf9d4-c2dtv" podUID="1d82648a-9f40-4a60-8532-ec3617de1f45" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Dec 11 08:43:34 crc kubenswrapper[4992]: I1211 08:43:34.818723 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 11 08:43:39 
crc kubenswrapper[4992]: I1211 08:43:39.830062 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.032865 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pgz9\" (UniqueName: \"kubernetes.io/projected/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-kube-api-access-8pgz9\") pod \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.032935 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-nb\") pod \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.032988 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-sb\") pod \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.033150 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-svc\") pod \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.033181 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-config\") pod \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.033203 4992 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-swift-storage-0\") pod \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\" (UID: \"d815e9b8-e583-4e4f-91cb-cc3a8f820eed\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.039282 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-kube-api-access-8pgz9" (OuterVolumeSpecName: "kube-api-access-8pgz9") pod "d815e9b8-e583-4e4f-91cb-cc3a8f820eed" (UID: "d815e9b8-e583-4e4f-91cb-cc3a8f820eed"). InnerVolumeSpecName "kube-api-access-8pgz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.082143 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d815e9b8-e583-4e4f-91cb-cc3a8f820eed" (UID: "d815e9b8-e583-4e4f-91cb-cc3a8f820eed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.084944 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d815e9b8-e583-4e4f-91cb-cc3a8f820eed" (UID: "d815e9b8-e583-4e4f-91cb-cc3a8f820eed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.085596 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-config" (OuterVolumeSpecName: "config") pod "d815e9b8-e583-4e4f-91cb-cc3a8f820eed" (UID: "d815e9b8-e583-4e4f-91cb-cc3a8f820eed"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.088592 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d815e9b8-e583-4e4f-91cb-cc3a8f820eed" (UID: "d815e9b8-e583-4e4f-91cb-cc3a8f820eed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.100207 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d815e9b8-e583-4e4f-91cb-cc3a8f820eed" (UID: "d815e9b8-e583-4e4f-91cb-cc3a8f820eed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.136165 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pgz9\" (UniqueName: \"kubernetes.io/projected/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-kube-api-access-8pgz9\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.136193 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.136202 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.136211 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.136221 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.136229 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d815e9b8-e583-4e4f-91cb-cc3a8f820eed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.158155 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236754 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffcm8\" (UniqueName: \"kubernetes.io/projected/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-kube-api-access-ffcm8\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236810 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-httpd-run\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236831 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-scripts\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236864 4992 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-config-data\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236887 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-logs\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236950 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-combined-ca-bundle\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.236977 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\" (UID: \"6eee4d38-9a13-4ee8-a7c6-993cf94560ba\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.237566 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-logs" (OuterVolumeSpecName: "logs") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.237771 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.240766 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-kube-api-access-ffcm8" (OuterVolumeSpecName: "kube-api-access-ffcm8") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "kube-api-access-ffcm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.241076 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.241103 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-scripts" (OuterVolumeSpecName: "scripts") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.259250 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.281054 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-config-data" (OuterVolumeSpecName: "config-data") pod "6eee4d38-9a13-4ee8-a7c6-993cf94560ba" (UID: "6eee4d38-9a13-4ee8-a7c6-993cf94560ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: E1211 08:43:40.338566 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Dec 11 08:43:40 crc kubenswrapper[4992]: E1211 08:43:40.338867 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qv57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(65182c76-fea3-4f83-b03f-bfce48989e82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.341193 4992 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.341260 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 
08:43:40.341274 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.341291 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.341305 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.341354 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.341365 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffcm8\" (UniqueName: \"kubernetes.io/projected/6eee4d38-9a13-4ee8-a7c6-993cf94560ba-kube-api-access-ffcm8\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.368546 4992 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.392877 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6eee4d38-9a13-4ee8-a7c6-993cf94560ba","Type":"ContainerDied","Data":"c29f9027ad964236b9c368fa01886d9be80e0a5d62a0c475e2db61f5eaf47803"} Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.392908 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.392940 4992 scope.go:117] "RemoveContainer" containerID="1081f8c2df1a4f71daa72be7ee33a511a9630dea62d86007798afd205267c806" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.395480 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" event={"ID":"d815e9b8-e583-4e4f-91cb-cc3a8f820eed","Type":"ContainerDied","Data":"a0d0f426021ac08a5a34f3768ff0be36ba308eb0af4481d79e5b9aee4e794128"} Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.395569 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.425998 4992 scope.go:117] "RemoveContainer" containerID="72d82bd399b9d9b20926e3c79679dff20f9bb067b5fbb1dbfab1e9d566529c63" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.433838 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-rbwgf"] Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.442950 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.443326 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-rbwgf"] Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.450883 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.459241 4992 scope.go:117] "RemoveContainer" containerID="db67f1215fdbac4f4de6eb3dba8fe572243a6fdc7fe6c122968a6e75a795f341" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.471358 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.475606 4992 scope.go:117] "RemoveContainer" containerID="9b94d824ce485c2d18fdb524cbfe2729238a291857d2b8d4ab09dab5e40ffd55" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.481763 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:40 crc kubenswrapper[4992]: E1211 08:43:40.482183 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="init" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482201 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="init" Dec 11 08:43:40 crc kubenswrapper[4992]: E1211 08:43:40.482209 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-log" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482216 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-log" Dec 11 08:43:40 crc kubenswrapper[4992]: E1211 08:43:40.482241 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482248 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" Dec 11 08:43:40 crc kubenswrapper[4992]: E1211 08:43:40.482258 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-httpd" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482266 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-httpd" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482462 4992 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-httpd" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482488 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.482504 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" containerName="glance-log" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.484688 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.488527 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.490315 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.493543 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.645947 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-logs\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646032 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " 
pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646064 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646124 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646147 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646180 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646207 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbwgp\" (UniqueName: \"kubernetes.io/projected/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-kube-api-access-fbwgp\") pod \"glance-default-external-api-0\" (UID: 
\"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.646244 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.728610 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747502 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747549 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747576 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747598 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbwgp\" (UniqueName: 
\"kubernetes.io/projected/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-kube-api-access-fbwgp\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747626 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747744 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-logs\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747829 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747856 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.747895 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.753685 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.753726 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-logs\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.757084 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.773129 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.774330 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " 
pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.775651 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.784048 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbwgp\" (UniqueName: \"kubernetes.io/projected/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-kube-api-access-fbwgp\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.787409 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.818936 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850066 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-config-data\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850162 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-httpd-run\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850302 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-logs\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850349 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850474 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-combined-ca-bundle\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850538 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-scripts\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850655 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbzwf\" (UniqueName: \"kubernetes.io/projected/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-kube-api-access-gbzwf\") pod \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\" (UID: \"3b42a1c9-1c6d-454a-adc4-ed2135c67de9\") " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.850773 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.851109 4992 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.851283 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-logs" (OuterVolumeSpecName: "logs") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.856395 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.857988 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-kube-api-access-gbzwf" (OuterVolumeSpecName: "kube-api-access-gbzwf") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "kube-api-access-gbzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.858546 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-scripts" (OuterVolumeSpecName: "scripts") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.888486 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.906755 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-config-data" (OuterVolumeSpecName: "config-data") pod "3b42a1c9-1c6d-454a-adc4-ed2135c67de9" (UID: "3b42a1c9-1c6d-454a-adc4-ed2135c67de9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.954428 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.954486 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.954497 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.954509 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.954517 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbzwf\" (UniqueName: \"kubernetes.io/projected/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-kube-api-access-gbzwf\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.954526 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b42a1c9-1c6d-454a-adc4-ed2135c67de9-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:40 crc kubenswrapper[4992]: I1211 08:43:40.997019 4992 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.055610 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.332234 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.404115 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb31451d-5ece-4d9a-a6ad-b781668ecbdb","Type":"ContainerStarted","Data":"87636abfeccd6213d94c7148eadd2d8293e8c02410df87a6b3c0d813396765f0"} Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.406590 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3b42a1c9-1c6d-454a-adc4-ed2135c67de9","Type":"ContainerDied","Data":"0cf5795a685b502fc70289f5132c6866f58cc61a5472fc63bbf42b281e2010e5"} Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.406646 4992 scope.go:117] "RemoveContainer" containerID="a03de1e16e2f059861f3e391dfd9a94c45ea1592078be02423a0e811825892c2" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.406708 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.442024 4992 scope.go:117] "RemoveContainer" containerID="b26c80d3ddeceecfec83cb1a0c74d740273c8d3ae0ff4db24073d85a4069b608" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.444378 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.452581 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.480957 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:41 crc kubenswrapper[4992]: E1211 08:43:41.481346 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-httpd" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.481368 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-httpd" Dec 11 08:43:41 crc kubenswrapper[4992]: E1211 08:43:41.481406 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-log" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.481415 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-log" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.481595 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-httpd" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.481620 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" containerName="glance-log" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.482532 4992 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.484456 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.485154 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.494740 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.666859 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.666915 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.666948 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.667101 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.667166 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.667201 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.667280 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbkg\" (UniqueName: \"kubernetes.io/projected/fd74de0b-d6c0-4892-befc-cd81b18a63ad-kube-api-access-djbkg\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.667405 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.768948 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769038 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769068 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769095 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769142 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769174 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769201 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769247 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbkg\" (UniqueName: \"kubernetes.io/projected/fd74de0b-d6c0-4892-befc-cd81b18a63ad-kube-api-access-djbkg\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769699 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.769733 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.770025 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.774887 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.775691 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.775827 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.779269 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.789054 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbkg\" (UniqueName: \"kubernetes.io/projected/fd74de0b-d6c0-4892-befc-cd81b18a63ad-kube-api-access-djbkg\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.797354 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:43:41 crc kubenswrapper[4992]: I1211 08:43:41.814016 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:42 crc kubenswrapper[4992]: I1211 08:43:42.104511 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b42a1c9-1c6d-454a-adc4-ed2135c67de9" path="/var/lib/kubelet/pods/3b42a1c9-1c6d-454a-adc4-ed2135c67de9/volumes" Dec 11 08:43:42 crc kubenswrapper[4992]: I1211 08:43:42.105406 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eee4d38-9a13-4ee8-a7c6-993cf94560ba" path="/var/lib/kubelet/pods/6eee4d38-9a13-4ee8-a7c6-993cf94560ba/volumes" Dec 11 08:43:42 crc kubenswrapper[4992]: I1211 08:43:42.106214 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" path="/var/lib/kubelet/pods/d815e9b8-e583-4e4f-91cb-cc3a8f820eed/volumes" Dec 11 08:43:42 crc kubenswrapper[4992]: I1211 08:43:42.325365 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:43:42 crc kubenswrapper[4992]: W1211 08:43:42.332676 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd74de0b_d6c0_4892_befc_cd81b18a63ad.slice/crio-ac9cf8286ffb3b087d03f0fae8c271412d82262c273ee09222a181e447a55be2 WatchSource:0}: Error finding container ac9cf8286ffb3b087d03f0fae8c271412d82262c273ee09222a181e447a55be2: Status 404 returned error can't find the container with id 
ac9cf8286ffb3b087d03f0fae8c271412d82262c273ee09222a181e447a55be2 Dec 11 08:43:42 crc kubenswrapper[4992]: I1211 08:43:42.420941 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd74de0b-d6c0-4892-befc-cd81b18a63ad","Type":"ContainerStarted","Data":"ac9cf8286ffb3b087d03f0fae8c271412d82262c273ee09222a181e447a55be2"} Dec 11 08:43:44 crc kubenswrapper[4992]: I1211 08:43:44.820007 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76fcf4b695-rbwgf" podUID="d815e9b8-e583-4e4f-91cb-cc3a8f820eed" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Dec 11 08:43:44 crc kubenswrapper[4992]: I1211 08:43:44.989748 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.448819 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd74de0b-d6c0-4892-befc-cd81b18a63ad","Type":"ContainerStarted","Data":"0701fd4db1efc8c804ad56c7e9b57ca640a87b64838528c17632b9a37c920e05"} Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.453845 4992 generic.go:334] "Generic (PLEG): container finished" podID="6952512b-7da5-4bc5-b91f-bbeb61056854" containerID="4f96a2f0ff0421ec2021b5ff15e71261702f804f2987ff43bb3f347dabb4dc10" exitCode=0 Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.453925 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smb2h" event={"ID":"6952512b-7da5-4bc5-b91f-bbeb61056854","Type":"ContainerDied","Data":"4f96a2f0ff0421ec2021b5ff15e71261702f804f2987ff43bb3f347dabb4dc10"} Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.460682 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v95pn" 
event={"ID":"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b","Type":"ContainerStarted","Data":"ce590d6db83eb7bb2add34306fa84d77454d77c15226f1f7e94828e6912c04da"} Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.463953 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tcqkm" event={"ID":"f8231c59-b8e7-4f7d-aeb0-888d579425ac","Type":"ContainerStarted","Data":"68e8663a144f38c29f8213a0711f49a3ad8430c222c5d0d7252ccb15fca0fc45"} Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.473001 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb31451d-5ece-4d9a-a6ad-b781668ecbdb","Type":"ContainerStarted","Data":"9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047"} Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.497397 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-v95pn" podStartSLOduration=2.940712963 podStartE2EDuration="1m12.497367742s" podCreationTimestamp="2025-12-11 08:42:33 +0000 UTC" firstStartedPulling="2025-12-11 08:42:35.334269227 +0000 UTC m=+1179.593743153" lastFinishedPulling="2025-12-11 08:43:44.890924006 +0000 UTC m=+1249.150397932" observedRunningTime="2025-12-11 08:43:45.488060164 +0000 UTC m=+1249.747534090" watchObservedRunningTime="2025-12-11 08:43:45.497367742 +0000 UTC m=+1249.756841668" Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.498314 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:43:45 crc kubenswrapper[4992]: I1211 08:43:45.514110 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-tcqkm" podStartSLOduration=2.175270202 podStartE2EDuration="1m11.514081551s" podCreationTimestamp="2025-12-11 08:42:34 +0000 UTC" firstStartedPulling="2025-12-11 08:42:35.530915357 +0000 UTC m=+1179.790389283" lastFinishedPulling="2025-12-11 
08:43:44.869726706 +0000 UTC m=+1249.129200632" observedRunningTime="2025-12-11 08:43:45.504792164 +0000 UTC m=+1249.764266110" watchObservedRunningTime="2025-12-11 08:43:45.514081551 +0000 UTC m=+1249.773555477" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.487038 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8mmcr" event={"ID":"73c99101-825a-4a3b-acf0-7fc522f3631f","Type":"ContainerStarted","Data":"ae4ff2d1d3811f0275fe7a48683a0f8b28982a4aeedd7083fc8e86a3c3c00288"} Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.494799 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb31451d-5ece-4d9a-a6ad-b781668ecbdb","Type":"ContainerStarted","Data":"5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f"} Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.498934 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd74de0b-d6c0-4892-befc-cd81b18a63ad","Type":"ContainerStarted","Data":"f2ef47261e2fb1871d241e57ce9e1cb95d90b4e3cefc4ce17773c1642c13c61c"} Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.517976 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-8mmcr" podStartSLOduration=3.746625983 podStartE2EDuration="1m13.517958534s" podCreationTimestamp="2025-12-11 08:42:33 +0000 UTC" firstStartedPulling="2025-12-11 08:42:35.116719294 +0000 UTC m=+1179.376193220" lastFinishedPulling="2025-12-11 08:43:44.888051845 +0000 UTC m=+1249.147525771" observedRunningTime="2025-12-11 08:43:46.514920369 +0000 UTC m=+1250.774394295" watchObservedRunningTime="2025-12-11 08:43:46.517958534 +0000 UTC m=+1250.777432460" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.546529 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.546503063 
podStartE2EDuration="5.546503063s" podCreationTimestamp="2025-12-11 08:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:46.535022222 +0000 UTC m=+1250.794496158" watchObservedRunningTime="2025-12-11 08:43:46.546503063 +0000 UTC m=+1250.805976989" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.565547 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.565525719 podStartE2EDuration="6.565525719s" podCreationTimestamp="2025-12-11 08:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:46.555736849 +0000 UTC m=+1250.815210785" watchObservedRunningTime="2025-12-11 08:43:46.565525719 +0000 UTC m=+1250.824999645" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.882876 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.974737 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-fernet-keys\") pod \"6952512b-7da5-4bc5-b91f-bbeb61056854\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.974882 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-combined-ca-bundle\") pod \"6952512b-7da5-4bc5-b91f-bbeb61056854\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.974941 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smtvd\" (UniqueName: \"kubernetes.io/projected/6952512b-7da5-4bc5-b91f-bbeb61056854-kube-api-access-smtvd\") pod \"6952512b-7da5-4bc5-b91f-bbeb61056854\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.975006 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-config-data\") pod \"6952512b-7da5-4bc5-b91f-bbeb61056854\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.975066 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-credential-keys\") pod \"6952512b-7da5-4bc5-b91f-bbeb61056854\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.975128 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-scripts\") pod \"6952512b-7da5-4bc5-b91f-bbeb61056854\" (UID: \"6952512b-7da5-4bc5-b91f-bbeb61056854\") " Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.983399 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6952512b-7da5-4bc5-b91f-bbeb61056854" (UID: "6952512b-7da5-4bc5-b91f-bbeb61056854"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.983685 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-scripts" (OuterVolumeSpecName: "scripts") pod "6952512b-7da5-4bc5-b91f-bbeb61056854" (UID: "6952512b-7da5-4bc5-b91f-bbeb61056854"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.983724 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6952512b-7da5-4bc5-b91f-bbeb61056854-kube-api-access-smtvd" (OuterVolumeSpecName: "kube-api-access-smtvd") pod "6952512b-7da5-4bc5-b91f-bbeb61056854" (UID: "6952512b-7da5-4bc5-b91f-bbeb61056854"). InnerVolumeSpecName "kube-api-access-smtvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:43:46 crc kubenswrapper[4992]: I1211 08:43:46.983754 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6952512b-7da5-4bc5-b91f-bbeb61056854" (UID: "6952512b-7da5-4bc5-b91f-bbeb61056854"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.002980 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-config-data" (OuterVolumeSpecName: "config-data") pod "6952512b-7da5-4bc5-b91f-bbeb61056854" (UID: "6952512b-7da5-4bc5-b91f-bbeb61056854"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.017621 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6952512b-7da5-4bc5-b91f-bbeb61056854" (UID: "6952512b-7da5-4bc5-b91f-bbeb61056854"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.078512 4992 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.078550 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.078563 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smtvd\" (UniqueName: \"kubernetes.io/projected/6952512b-7da5-4bc5-b91f-bbeb61056854-kube-api-access-smtvd\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.078573 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-config-data\") on node \"crc\" DevicePath 
\"\"" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.078581 4992 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.078589 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6952512b-7da5-4bc5-b91f-bbeb61056854-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.149495 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.396494 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5c6ddf9d4-c2dtv" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.456669 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55cbdd6686-ddpfq"] Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.511703 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-smb2h" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.512813 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-smb2h" event={"ID":"6952512b-7da5-4bc5-b91f-bbeb61056854","Type":"ContainerDied","Data":"60fa9a9a4109f9ce10208231f2db1a0b64dd38e67699a38ad590f26fc9dba543"} Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.512857 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fa9a9a4109f9ce10208231f2db1a0b64dd38e67699a38ad590f26fc9dba543" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.512993 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon-log" containerID="cri-o://d65b0fba99de8e51ce552bd8878515eec8d287c8f746937db41d8b1365b67a0e" gracePeriod=30 Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.513098 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" containerID="cri-o://271c57eec089769bbb4ceebf12e5b27ed20fe1df01b7ed3ff394d432f49782ec" gracePeriod=30 Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.606902 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7ccd8c54fd-6rk8g"] Dec 11 08:43:47 crc kubenswrapper[4992]: E1211 08:43:47.607365 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6952512b-7da5-4bc5-b91f-bbeb61056854" containerName="keystone-bootstrap" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.607389 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6952512b-7da5-4bc5-b91f-bbeb61056854" containerName="keystone-bootstrap" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.607611 4992 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6952512b-7da5-4bc5-b91f-bbeb61056854" containerName="keystone-bootstrap" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.608337 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.611980 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.612036 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.612307 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.612405 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.613051 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xs4g8" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.615140 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.622032 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7ccd8c54fd-6rk8g"] Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.690653 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-public-tls-certs\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.690715 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-config-data\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.690884 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-combined-ca-bundle\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.690968 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-fernet-keys\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.691005 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-scripts\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.691076 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-internal-tls-certs\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.691110 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gfhnj\" (UniqueName: \"kubernetes.io/projected/10274b54-502d-49df-a610-a6b7cddcce42-kube-api-access-gfhnj\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.691231 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-credential-keys\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.792540 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-fernet-keys\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.792917 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-scripts\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.792961 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-internal-tls-certs\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.792990 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfhnj\" (UniqueName: 
\"kubernetes.io/projected/10274b54-502d-49df-a610-a6b7cddcce42-kube-api-access-gfhnj\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.793051 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-credential-keys\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.793113 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-public-tls-certs\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.793145 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-config-data\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.793194 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-combined-ca-bundle\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.796726 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-credential-keys\") pod 
\"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.797263 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-internal-tls-certs\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.798116 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-config-data\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.799085 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-combined-ca-bundle\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.802024 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-fernet-keys\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.810787 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-scripts\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc 
kubenswrapper[4992]: I1211 08:43:47.813127 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/10274b54-502d-49df-a610-a6b7cddcce42-public-tls-certs\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.816497 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfhnj\" (UniqueName: \"kubernetes.io/projected/10274b54-502d-49df-a610-a6b7cddcce42-kube-api-access-gfhnj\") pod \"keystone-7ccd8c54fd-6rk8g\" (UID: \"10274b54-502d-49df-a610-a6b7cddcce42\") " pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:47 crc kubenswrapper[4992]: I1211 08:43:47.929803 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:48 crc kubenswrapper[4992]: I1211 08:43:48.855972 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7ccd8c54fd-6rk8g"] Dec 11 08:43:49 crc kubenswrapper[4992]: I1211 08:43:49.556295 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7ccd8c54fd-6rk8g" event={"ID":"10274b54-502d-49df-a610-a6b7cddcce42","Type":"ContainerStarted","Data":"0d5f014f86704d6bf9a5d0c1217518e5063be27f33a19460a7ab92238ffaa7cd"} Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.565780 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7ccd8c54fd-6rk8g" event={"ID":"10274b54-502d-49df-a610-a6b7cddcce42","Type":"ContainerStarted","Data":"37f4328440d86a5d60ebeb106c2e4fca92382dacb26e52d332deb91c196c0fda"} Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.566166 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.593316 4992 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/keystone-7ccd8c54fd-6rk8g" podStartSLOduration=3.593297649 podStartE2EDuration="3.593297649s" podCreationTimestamp="2025-12-11 08:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:43:50.584992295 +0000 UTC m=+1254.844466241" watchObservedRunningTime="2025-12-11 08:43:50.593297649 +0000 UTC m=+1254.852771575" Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.811676 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.811729 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.859526 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 08:43:50 crc kubenswrapper[4992]: I1211 08:43:50.870449 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 08:43:51 crc kubenswrapper[4992]: I1211 08:43:51.573287 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 08:43:51 crc kubenswrapper[4992]: I1211 08:43:51.573335 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 08:43:51 crc kubenswrapper[4992]: I1211 08:43:51.814390 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:51 crc kubenswrapper[4992]: I1211 08:43:51.814442 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:51 crc kubenswrapper[4992]: I1211 08:43:51.845813 4992 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:51 crc kubenswrapper[4992]: I1211 08:43:51.861285 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:52 crc kubenswrapper[4992]: I1211 08:43:52.584496 4992 generic.go:334] "Generic (PLEG): container finished" podID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerID="271c57eec089769bbb4ceebf12e5b27ed20fe1df01b7ed3ff394d432f49782ec" exitCode=0 Dec 11 08:43:52 crc kubenswrapper[4992]: I1211 08:43:52.584527 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cbdd6686-ddpfq" event={"ID":"4499ae00-40e9-4f82-a285-b4962cbc3c61","Type":"ContainerDied","Data":"271c57eec089769bbb4ceebf12e5b27ed20fe1df01b7ed3ff394d432f49782ec"} Dec 11 08:43:52 crc kubenswrapper[4992]: I1211 08:43:52.585251 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:52 crc kubenswrapper[4992]: I1211 08:43:52.585283 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:53 crc kubenswrapper[4992]: I1211 08:43:53.210700 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 11 08:43:53 crc kubenswrapper[4992]: I1211 08:43:53.958309 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 08:43:53 crc kubenswrapper[4992]: I1211 08:43:53.958418 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:43:53 crc kubenswrapper[4992]: I1211 08:43:53.959422 4992 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 08:43:54 crc kubenswrapper[4992]: I1211 08:43:54.722305 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 08:43:54 crc kubenswrapper[4992]: I1211 08:43:54.722466 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:43:54 crc kubenswrapper[4992]: I1211 08:43:54.723856 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 08:44:02 crc kubenswrapper[4992]: E1211 08:44:02.320460 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 11 08:44:02 crc kubenswrapper[4992]: E1211 08:44:02.321290 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qv57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(65182c76-fea3-4f83-b03f-bfce48989e82): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 08:44:02 crc kubenswrapper[4992]: E1211 08:44:02.323662 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="65182c76-fea3-4f83-b03f-bfce48989e82" Dec 11 08:44:02 crc kubenswrapper[4992]: I1211 08:44:02.677565 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="65182c76-fea3-4f83-b03f-bfce48989e82" containerName="ceilometer-notification-agent" 
containerID="cri-o://b2d3e4d2e363ce847d275f8c9d4147ea923added3a98b3531764f735c952f8a3" gracePeriod=30 Dec 11 08:44:03 crc kubenswrapper[4992]: I1211 08:44:03.211344 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.708032 4992 generic.go:334] "Generic (PLEG): container finished" podID="65182c76-fea3-4f83-b03f-bfce48989e82" containerID="b2d3e4d2e363ce847d275f8c9d4147ea923added3a98b3531764f735c952f8a3" exitCode=0 Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.708114 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65182c76-fea3-4f83-b03f-bfce48989e82","Type":"ContainerDied","Data":"b2d3e4d2e363ce847d275f8c9d4147ea923added3a98b3531764f735c952f8a3"} Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.785847 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.935534 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-config-data\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.935660 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-run-httpd\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.935853 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qv57\" (UniqueName: \"kubernetes.io/projected/65182c76-fea3-4f83-b03f-bfce48989e82-kube-api-access-7qv57\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.935889 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-sg-core-conf-yaml\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.935957 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-log-httpd\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.935983 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-combined-ca-bundle\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.936007 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-scripts\") pod \"65182c76-fea3-4f83-b03f-bfce48989e82\" (UID: \"65182c76-fea3-4f83-b03f-bfce48989e82\") " Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.938077 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.938780 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.943358 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.947194 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65182c76-fea3-4f83-b03f-bfce48989e82-kube-api-access-7qv57" (OuterVolumeSpecName: "kube-api-access-7qv57") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "kube-api-access-7qv57". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.948013 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-scripts" (OuterVolumeSpecName: "scripts") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.967978 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:05 crc kubenswrapper[4992]: I1211 08:44:05.989575 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-config-data" (OuterVolumeSpecName: "config-data") pod "65182c76-fea3-4f83-b03f-bfce48989e82" (UID: "65182c76-fea3-4f83-b03f-bfce48989e82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039910 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039938 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039949 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039957 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039965 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/65182c76-fea3-4f83-b03f-bfce48989e82-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039974 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qv57\" (UniqueName: \"kubernetes.io/projected/65182c76-fea3-4f83-b03f-bfce48989e82-kube-api-access-7qv57\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.039983 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/65182c76-fea3-4f83-b03f-bfce48989e82-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.723375 4992 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"65182c76-fea3-4f83-b03f-bfce48989e82","Type":"ContainerDied","Data":"06f547eef2b98629473d5e712053bcc2aeaaf58a6ef098fde21f051966e188e7"} Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.723439 4992 scope.go:117] "RemoveContainer" containerID="b2d3e4d2e363ce847d275f8c9d4147ea923added3a98b3531764f735c952f8a3" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.723616 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.774735 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.781205 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.805955 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:44:06 crc kubenswrapper[4992]: E1211 08:44:06.806330 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65182c76-fea3-4f83-b03f-bfce48989e82" containerName="ceilometer-notification-agent" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.806345 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="65182c76-fea3-4f83-b03f-bfce48989e82" containerName="ceilometer-notification-agent" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.806682 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="65182c76-fea3-4f83-b03f-bfce48989e82" containerName="ceilometer-notification-agent" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.810430 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.812623 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.812831 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.823506 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955157 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgl6v\" (UniqueName: \"kubernetes.io/projected/3483a09f-bb6f-470f-9485-3241dd60a448-kube-api-access-vgl6v\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955228 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-log-httpd\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955270 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-scripts\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955303 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-config-data\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " 
pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955511 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955554 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-run-httpd\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:06 crc kubenswrapper[4992]: I1211 08:44:06.955596 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.057857 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.057906 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-run-httpd\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.057931 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.058029 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgl6v\" (UniqueName: \"kubernetes.io/projected/3483a09f-bb6f-470f-9485-3241dd60a448-kube-api-access-vgl6v\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.058086 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-log-httpd\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.058139 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-scripts\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.058179 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-config-data\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.058609 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-run-httpd\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc 
kubenswrapper[4992]: I1211 08:44:07.059022 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-log-httpd\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.064077 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.065019 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-config-data\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.065126 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.066036 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-scripts\") pod \"ceilometer-0\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.084123 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgl6v\" (UniqueName: \"kubernetes.io/projected/3483a09f-bb6f-470f-9485-3241dd60a448-kube-api-access-vgl6v\") pod \"ceilometer-0\" (UID: 
\"3483a09f-bb6f-470f-9485-3241dd60a448\") " pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.139062 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.430236 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:44:07 crc kubenswrapper[4992]: W1211 08:44:07.433878 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3483a09f_bb6f_470f_9485_3241dd60a448.slice/crio-e674f6911babdec5dbb64c46e01fe5cdc3db0235aaa1844b24653a6ed681520b WatchSource:0}: Error finding container e674f6911babdec5dbb64c46e01fe5cdc3db0235aaa1844b24653a6ed681520b: Status 404 returned error can't find the container with id e674f6911babdec5dbb64c46e01fe5cdc3db0235aaa1844b24653a6ed681520b Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.437158 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 08:44:07 crc kubenswrapper[4992]: I1211 08:44:07.734993 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerStarted","Data":"e674f6911babdec5dbb64c46e01fe5cdc3db0235aaa1844b24653a6ed681520b"} Dec 11 08:44:08 crc kubenswrapper[4992]: I1211 08:44:08.105699 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65182c76-fea3-4f83-b03f-bfce48989e82" path="/var/lib/kubelet/pods/65182c76-fea3-4f83-b03f-bfce48989e82/volumes" Dec 11 08:44:08 crc kubenswrapper[4992]: I1211 08:44:08.745706 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerStarted","Data":"33a790dfdb73d816a315e25db1506c2e0bc4433cf2a7c44400b12f65f7afafbb"} Dec 11 08:44:10 crc kubenswrapper[4992]: I1211 
08:44:10.763697 4992 generic.go:334] "Generic (PLEG): container finished" podID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" containerID="68e8663a144f38c29f8213a0711f49a3ad8430c222c5d0d7252ccb15fca0fc45" exitCode=0 Dec 11 08:44:10 crc kubenswrapper[4992]: I1211 08:44:10.763806 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tcqkm" event={"ID":"f8231c59-b8e7-4f7d-aeb0-888d579425ac","Type":"ContainerDied","Data":"68e8663a144f38c29f8213a0711f49a3ad8430c222c5d0d7252ccb15fca0fc45"} Dec 11 08:44:10 crc kubenswrapper[4992]: I1211 08:44:10.768719 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerStarted","Data":"366e04509132822b09c09069ef03eec89dbaf165e106c581c1949254fad33501"} Dec 11 08:44:11 crc kubenswrapper[4992]: I1211 08:44:11.778861 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerStarted","Data":"e70914344f697e3a0b2023c5175f2a5257551e372d0ab94024320dc6c33e2083"} Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.127702 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tcqkm" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.265055 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-scripts\") pod \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.265140 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-config-data\") pod \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.265324 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-combined-ca-bundle\") pod \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.265360 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkt9g\" (UniqueName: \"kubernetes.io/projected/f8231c59-b8e7-4f7d-aeb0-888d579425ac-kube-api-access-hkt9g\") pod \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.265407 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8231c59-b8e7-4f7d-aeb0-888d579425ac-logs\") pod \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\" (UID: \"f8231c59-b8e7-4f7d-aeb0-888d579425ac\") " Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.267156 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f8231c59-b8e7-4f7d-aeb0-888d579425ac-logs" (OuterVolumeSpecName: "logs") pod "f8231c59-b8e7-4f7d-aeb0-888d579425ac" (UID: "f8231c59-b8e7-4f7d-aeb0-888d579425ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.279007 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-scripts" (OuterVolumeSpecName: "scripts") pod "f8231c59-b8e7-4f7d-aeb0-888d579425ac" (UID: "f8231c59-b8e7-4f7d-aeb0-888d579425ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.279211 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8231c59-b8e7-4f7d-aeb0-888d579425ac-kube-api-access-hkt9g" (OuterVolumeSpecName: "kube-api-access-hkt9g") pod "f8231c59-b8e7-4f7d-aeb0-888d579425ac" (UID: "f8231c59-b8e7-4f7d-aeb0-888d579425ac"). InnerVolumeSpecName "kube-api-access-hkt9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.306246 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8231c59-b8e7-4f7d-aeb0-888d579425ac" (UID: "f8231c59-b8e7-4f7d-aeb0-888d579425ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.306365 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-config-data" (OuterVolumeSpecName: "config-data") pod "f8231c59-b8e7-4f7d-aeb0-888d579425ac" (UID: "f8231c59-b8e7-4f7d-aeb0-888d579425ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.366850 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.366884 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkt9g\" (UniqueName: \"kubernetes.io/projected/f8231c59-b8e7-4f7d-aeb0-888d579425ac-kube-api-access-hkt9g\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.366895 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8231c59-b8e7-4f7d-aeb0-888d579425ac-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.366904 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.366912 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8231c59-b8e7-4f7d-aeb0-888d579425ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.795185 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-tcqkm" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.811089 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-tcqkm" event={"ID":"f8231c59-b8e7-4f7d-aeb0-888d579425ac","Type":"ContainerDied","Data":"dec07bbedb97eb11a289c8ed382c2e3903633c6761ea1dcaef1b86485f29afa7"} Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.811198 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec07bbedb97eb11a289c8ed382c2e3903633c6761ea1dcaef1b86485f29afa7" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.991821 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74db564d44-wj6gh"] Dec 11 08:44:12 crc kubenswrapper[4992]: E1211 08:44:12.992182 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" containerName="placement-db-sync" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.992200 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" containerName="placement-db-sync" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.992383 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" containerName="placement-db-sync" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.993682 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.995852 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.996170 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.996343 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2xdtn" Dec 11 08:44:12 crc kubenswrapper[4992]: I1211 08:44:12.996536 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.000728 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.022765 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74db564d44-wj6gh"] Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.081719 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-combined-ca-bundle\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.081876 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-474xc\" (UniqueName: \"kubernetes.io/projected/1215b406-66dc-4132-a0ea-76010ee7b44d-kube-api-access-474xc\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.081912 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-internal-tls-certs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.082057 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-config-data\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.082197 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1215b406-66dc-4132-a0ea-76010ee7b44d-logs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.082418 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-scripts\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.082517 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-public-tls-certs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184508 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-474xc\" (UniqueName: \"kubernetes.io/projected/1215b406-66dc-4132-a0ea-76010ee7b44d-kube-api-access-474xc\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184570 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-internal-tls-certs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184650 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-config-data\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184681 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1215b406-66dc-4132-a0ea-76010ee7b44d-logs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184733 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-scripts\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184760 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-public-tls-certs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.184784 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-combined-ca-bundle\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.185528 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1215b406-66dc-4132-a0ea-76010ee7b44d-logs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.190071 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-combined-ca-bundle\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.190372 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-config-data\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.190712 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-scripts\") pod \"placement-74db564d44-wj6gh\" (UID: 
\"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.191761 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-internal-tls-certs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.193526 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1215b406-66dc-4132-a0ea-76010ee7b44d-public-tls-certs\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.210531 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55cbdd6686-ddpfq" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.210658 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.227357 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-474xc\" (UniqueName: \"kubernetes.io/projected/1215b406-66dc-4132-a0ea-76010ee7b44d-kube-api-access-474xc\") pod \"placement-74db564d44-wj6gh\" (UID: \"1215b406-66dc-4132-a0ea-76010ee7b44d\") " pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:13 crc kubenswrapper[4992]: I1211 08:44:13.315864 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:14 crc kubenswrapper[4992]: I1211 08:44:14.336444 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74db564d44-wj6gh"] Dec 11 08:44:14 crc kubenswrapper[4992]: W1211 08:44:14.344290 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1215b406_66dc_4132_a0ea_76010ee7b44d.slice/crio-a8a1ff4cf4b150e866c78e2e83d99a1a71e3926678eebc5ae18028b037bc2d3a WatchSource:0}: Error finding container a8a1ff4cf4b150e866c78e2e83d99a1a71e3926678eebc5ae18028b037bc2d3a: Status 404 returned error can't find the container with id a8a1ff4cf4b150e866c78e2e83d99a1a71e3926678eebc5ae18028b037bc2d3a Dec 11 08:44:14 crc kubenswrapper[4992]: I1211 08:44:14.813301 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74db564d44-wj6gh" event={"ID":"1215b406-66dc-4132-a0ea-76010ee7b44d","Type":"ContainerStarted","Data":"d5bafca25d2ddc5ef3c867d53e794e894c3ec9718548b624bf51843dbb9234d7"} Dec 11 08:44:14 crc kubenswrapper[4992]: I1211 08:44:14.813681 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74db564d44-wj6gh" event={"ID":"1215b406-66dc-4132-a0ea-76010ee7b44d","Type":"ContainerStarted","Data":"a8a1ff4cf4b150e866c78e2e83d99a1a71e3926678eebc5ae18028b037bc2d3a"} Dec 11 08:44:14 crc kubenswrapper[4992]: I1211 08:44:14.818178 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerStarted","Data":"5648e382dce9e69198a13d156f5f6a3145ca3ea7b7afcbabd905957b155749c9"} Dec 11 08:44:14 crc kubenswrapper[4992]: I1211 08:44:14.818547 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 08:44:14 crc kubenswrapper[4992]: I1211 08:44:14.845691 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.398044158 podStartE2EDuration="8.845666908s" podCreationTimestamp="2025-12-11 08:44:06 +0000 UTC" firstStartedPulling="2025-12-11 08:44:07.436876832 +0000 UTC m=+1271.696350748" lastFinishedPulling="2025-12-11 08:44:13.884499562 +0000 UTC m=+1278.143973498" observedRunningTime="2025-12-11 08:44:14.83844377 +0000 UTC m=+1279.097917716" watchObservedRunningTime="2025-12-11 08:44:14.845666908 +0000 UTC m=+1279.105140844" Dec 11 08:44:15 crc kubenswrapper[4992]: I1211 08:44:15.828961 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74db564d44-wj6gh" event={"ID":"1215b406-66dc-4132-a0ea-76010ee7b44d","Type":"ContainerStarted","Data":"e2361bbe15a25f6d66742b380ecd8b2993baf63c552f8e7c7ea2552133af50cf"} Dec 11 08:44:15 crc kubenswrapper[4992]: I1211 08:44:15.831017 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:15 crc kubenswrapper[4992]: I1211 08:44:15.831250 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:15 crc kubenswrapper[4992]: I1211 08:44:15.864013 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74db564d44-wj6gh" podStartSLOduration=3.863986644 podStartE2EDuration="3.863986644s" podCreationTimestamp="2025-12-11 08:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:15.862252462 +0000 UTC m=+1280.121726428" watchObservedRunningTime="2025-12-11 08:44:15.863986644 +0000 UTC m=+1280.123460580" Dec 11 08:44:17 crc kubenswrapper[4992]: I1211 08:44:17.845857 4992 generic.go:334] "Generic (PLEG): container finished" podID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerID="d65b0fba99de8e51ce552bd8878515eec8d287c8f746937db41d8b1365b67a0e" exitCode=137 Dec 11 08:44:17 crc 
kubenswrapper[4992]: I1211 08:44:17.845946 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cbdd6686-ddpfq" event={"ID":"4499ae00-40e9-4f82-a285-b4962cbc3c61","Type":"ContainerDied","Data":"d65b0fba99de8e51ce552bd8878515eec8d287c8f746937db41d8b1365b67a0e"} Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.418009 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.493599 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-config-data\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.493675 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-scripts\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.493782 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4499ae00-40e9-4f82-a285-b4962cbc3c61-logs\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.493864 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rmlm\" (UniqueName: \"kubernetes.io/projected/4499ae00-40e9-4f82-a285-b4962cbc3c61-kube-api-access-4rmlm\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.493902 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-secret-key\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.493994 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-combined-ca-bundle\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.494015 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-tls-certs\") pod \"4499ae00-40e9-4f82-a285-b4962cbc3c61\" (UID: \"4499ae00-40e9-4f82-a285-b4962cbc3c61\") " Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.495244 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4499ae00-40e9-4f82-a285-b4962cbc3c61-logs" (OuterVolumeSpecName: "logs") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.500076 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4499ae00-40e9-4f82-a285-b4962cbc3c61-kube-api-access-4rmlm" (OuterVolumeSpecName: "kube-api-access-4rmlm") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "kube-api-access-4rmlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.500680 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.533184 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-config-data" (OuterVolumeSpecName: "config-data") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.546259 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.551875 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.552445 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-scripts" (OuterVolumeSpecName: "scripts") pod "4499ae00-40e9-4f82-a285-b4962cbc3c61" (UID: "4499ae00-40e9-4f82-a285-b4962cbc3c61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595544 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4499ae00-40e9-4f82-a285-b4962cbc3c61-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595579 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rmlm\" (UniqueName: \"kubernetes.io/projected/4499ae00-40e9-4f82-a285-b4962cbc3c61-kube-api-access-4rmlm\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595594 4992 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595604 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595615 4992 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4499ae00-40e9-4f82-a285-b4962cbc3c61-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595624 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.595649 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4499ae00-40e9-4f82-a285-b4962cbc3c61-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.858515 4992 generic.go:334] "Generic (PLEG): container finished" podID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" containerID="ce590d6db83eb7bb2add34306fa84d77454d77c15226f1f7e94828e6912c04da" exitCode=0 Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.858597 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v95pn" event={"ID":"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b","Type":"ContainerDied","Data":"ce590d6db83eb7bb2add34306fa84d77454d77c15226f1f7e94828e6912c04da"} Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.862005 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cbdd6686-ddpfq" event={"ID":"4499ae00-40e9-4f82-a285-b4962cbc3c61","Type":"ContainerDied","Data":"fe2dd8dccf4ee282c6008cd9313473da34ed47bc0b622b48fd1db5dd5cf86440"} Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.862054 4992 scope.go:117] "RemoveContainer" containerID="271c57eec089769bbb4ceebf12e5b27ed20fe1df01b7ed3ff394d432f49782ec" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.862140 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55cbdd6686-ddpfq" Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.905980 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55cbdd6686-ddpfq"] Dec 11 08:44:18 crc kubenswrapper[4992]: I1211 08:44:18.916853 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55cbdd6686-ddpfq"] Dec 11 08:44:19 crc kubenswrapper[4992]: I1211 08:44:19.050452 4992 scope.go:117] "RemoveContainer" containerID="d65b0fba99de8e51ce552bd8878515eec8d287c8f746937db41d8b1365b67a0e" Dec 11 08:44:19 crc kubenswrapper[4992]: I1211 08:44:19.575503 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7ccd8c54fd-6rk8g" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.107610 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" path="/var/lib/kubelet/pods/4499ae00-40e9-4f82-a285-b4962cbc3c61/volumes" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.206448 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v95pn" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.326458 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c8bl\" (UniqueName: \"kubernetes.io/projected/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-kube-api-access-4c8bl\") pod \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.326574 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-db-sync-config-data\") pod \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.326616 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-combined-ca-bundle\") pod \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\" (UID: \"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b\") " Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.341979 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-kube-api-access-4c8bl" (OuterVolumeSpecName: "kube-api-access-4c8bl") pod "afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" (UID: "afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b"). InnerVolumeSpecName "kube-api-access-4c8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.342847 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" (UID: "afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.397789 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" (UID: "afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.429037 4992 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.429068 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.429078 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c8bl\" (UniqueName: \"kubernetes.io/projected/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b-kube-api-access-4c8bl\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647187 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 08:44:20 crc kubenswrapper[4992]: E1211 08:44:20.647664 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647684 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" Dec 11 08:44:20 crc kubenswrapper[4992]: E1211 08:44:20.647700 4992 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon-log" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647707 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon-log" Dec 11 08:44:20 crc kubenswrapper[4992]: E1211 08:44:20.647729 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" containerName="barbican-db-sync" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647736 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" containerName="barbican-db-sync" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647943 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647977 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4499ae00-40e9-4f82-a285-b4962cbc3c61" containerName="horizon-log" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.647991 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" containerName="barbican-db-sync" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.648729 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.651776 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d8r9t" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.652121 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.660040 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.660066 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.732864 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421fdf51-5a39-4d80-b066-a715006c2f85-openstack-config-secret\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.732962 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421fdf51-5a39-4d80-b066-a715006c2f85-openstack-config\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.733029 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421fdf51-5a39-4d80-b066-a715006c2f85-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.733050 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvsrz\" (UniqueName: \"kubernetes.io/projected/421fdf51-5a39-4d80-b066-a715006c2f85-kube-api-access-pvsrz\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.835005 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421fdf51-5a39-4d80-b066-a715006c2f85-openstack-config-secret\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.835104 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/421fdf51-5a39-4d80-b066-a715006c2f85-openstack-config\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.835186 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421fdf51-5a39-4d80-b066-a715006c2f85-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.835228 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvsrz\" (UniqueName: \"kubernetes.io/projected/421fdf51-5a39-4d80-b066-a715006c2f85-kube-api-access-pvsrz\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.836361 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/421fdf51-5a39-4d80-b066-a715006c2f85-openstack-config\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.838241 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/421fdf51-5a39-4d80-b066-a715006c2f85-openstack-config-secret\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.839288 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/421fdf51-5a39-4d80-b066-a715006c2f85-combined-ca-bundle\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.854102 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvsrz\" (UniqueName: \"kubernetes.io/projected/421fdf51-5a39-4d80-b066-a715006c2f85-kube-api-access-pvsrz\") pod \"openstackclient\" (UID: \"421fdf51-5a39-4d80-b066-a715006c2f85\") " pod="openstack/openstackclient" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.879167 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-v95pn" event={"ID":"afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b","Type":"ContainerDied","Data":"11dd8f3e8c55dc9a5359786a2744d6b901deaed55394543c3066ebe555da5fb9"} Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.879206 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11dd8f3e8c55dc9a5359786a2744d6b901deaed55394543c3066ebe555da5fb9" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.879276 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-v95pn" Dec 11 08:44:20 crc kubenswrapper[4992]: I1211 08:44:20.966939 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.040736 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76f54b9b99-lv6z6"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.043114 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.051223 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cqxff" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.051291 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.051365 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.068298 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-67cb46677b-w6zfw"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.070256 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.073782 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.097792 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76f54b9b99-lv6z6"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.129005 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67cb46677b-w6zfw"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.148882 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-config-data\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149045 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-combined-ca-bundle\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149076 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7g2\" (UniqueName: \"kubernetes.io/projected/8cdf31db-16c4-4bfc-bb50-27a283b61abd-kube-api-access-sj7g2\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149110 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-config-data\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149148 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9vp5\" (UniqueName: \"kubernetes.io/projected/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-kube-api-access-p9vp5\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149173 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-config-data-custom\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149214 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-logs\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149277 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdf31db-16c4-4bfc-bb50-27a283b61abd-logs\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " 
pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149343 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-config-data-custom\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.149376 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-combined-ca-bundle\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.165537 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-c5f6c"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.174674 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.181181 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-c5f6c"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252744 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-logs\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252812 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdf31db-16c4-4bfc-bb50-27a283b61abd-logs\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252847 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-config-data-custom\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252867 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-combined-ca-bundle\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252895 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-config-data\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252946 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-config\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252965 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.252995 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253037 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253053 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7wsjj\" (UniqueName: \"kubernetes.io/projected/c4783cdc-5222-45f5-b56c-f04c06cf7df7-kube-api-access-7wsjj\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253072 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-combined-ca-bundle\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253088 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj7g2\" (UniqueName: \"kubernetes.io/projected/8cdf31db-16c4-4bfc-bb50-27a283b61abd-kube-api-access-sj7g2\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253111 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-config-data\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253143 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9vp5\" (UniqueName: \"kubernetes.io/projected/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-kube-api-access-p9vp5\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 
08:44:21.253167 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-config-data-custom\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253193 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.253570 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-logs\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.255561 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cdf31db-16c4-4bfc-bb50-27a283b61abd-logs\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.262480 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-combined-ca-bundle\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 
08:44:21.266597 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-config-data\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.268435 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-config-data-custom\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.270436 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-config-data-custom\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.284923 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-config-data\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.290975 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj7g2\" (UniqueName: \"kubernetes.io/projected/8cdf31db-16c4-4bfc-bb50-27a283b61abd-kube-api-access-sj7g2\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 
08:44:21.291812 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cdf31db-16c4-4bfc-bb50-27a283b61abd-combined-ca-bundle\") pod \"barbican-worker-76f54b9b99-lv6z6\" (UID: \"8cdf31db-16c4-4bfc-bb50-27a283b61abd\") " pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.293978 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6886c495d8-l2qjd"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.297549 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.299467 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9vp5\" (UniqueName: \"kubernetes.io/projected/a69c55bb-ed74-4b63-a8b6-713b08b1dcb4-kube-api-access-p9vp5\") pod \"barbican-keystone-listener-67cb46677b-w6zfw\" (UID: \"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4\") " pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.301974 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.332710 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6886c495d8-l2qjd"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354531 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-config\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354574 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data-custom\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354591 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-combined-ca-bundle\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354610 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354627 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354667 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354714 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354730 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wsjj\" (UniqueName: \"kubernetes.io/projected/c4783cdc-5222-45f5-b56c-f04c06cf7df7-kube-api-access-7wsjj\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354768 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354792 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kk59\" (UniqueName: \"kubernetes.io/projected/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-kube-api-access-6kk59\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.354831 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-logs\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.355875 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-config\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.356405 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.357075 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.357585 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.358362 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.374352 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wsjj\" (UniqueName: 
\"kubernetes.io/projected/c4783cdc-5222-45f5-b56c-f04c06cf7df7-kube-api-access-7wsjj\") pod \"dnsmasq-dns-59d5ff467f-c5f6c\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.397055 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76f54b9b99-lv6z6" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.456236 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kk59\" (UniqueName: \"kubernetes.io/projected/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-kube-api-access-6kk59\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.456301 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-logs\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.456379 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data-custom\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.456396 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-combined-ca-bundle\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc 
kubenswrapper[4992]: I1211 08:44:21.456412 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.459097 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-logs\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.461817 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data-custom\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.462763 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-combined-ca-bundle\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.462899 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.471052 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.472301 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kk59\" (UniqueName: \"kubernetes.io/projected/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-kube-api-access-6kk59\") pod \"barbican-api-6886c495d8-l2qjd\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") " pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.537709 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.604584 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.670228 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:21 crc kubenswrapper[4992]: W1211 08:44:21.892426 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cdf31db_16c4_4bfc_bb50_27a283b61abd.slice/crio-c32aef1eaf5db0b0ab930f492094c10437f7d1955642d722f7befd98165875a8 WatchSource:0}: Error finding container c32aef1eaf5db0b0ab930f492094c10437f7d1955642d722f7befd98165875a8: Status 404 returned error can't find the container with id c32aef1eaf5db0b0ab930f492094c10437f7d1955642d722f7befd98165875a8 Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.900204 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76f54b9b99-lv6z6"] Dec 11 08:44:21 crc kubenswrapper[4992]: I1211 08:44:21.901236 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"421fdf51-5a39-4d80-b066-a715006c2f85","Type":"ContainerStarted","Data":"abab8bcd8cfc2fe8b483649360d089a077489051f962cf0388c1b5eba87521b1"} Dec 11 08:44:22 crc kubenswrapper[4992]: W1211 08:44:22.025146 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda69c55bb_ed74_4b63_a8b6_713b08b1dcb4.slice/crio-6d4af57e8e849a6fb98f3e9ff9da1097c67192a8238f9073bb423c2de5dcbc95 WatchSource:0}: Error finding container 6d4af57e8e849a6fb98f3e9ff9da1097c67192a8238f9073bb423c2de5dcbc95: Status 404 returned error can't find the container with id 6d4af57e8e849a6fb98f3e9ff9da1097c67192a8238f9073bb423c2de5dcbc95 Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.025180 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-67cb46677b-w6zfw"] Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.042753 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-c5f6c"] Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.051260 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6886c495d8-l2qjd"] Dec 11 08:44:22 crc kubenswrapper[4992]: W1211 08:44:22.054875 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4783cdc_5222_45f5_b56c_f04c06cf7df7.slice/crio-b293007af51f2e848a23bd23e603fe07d8613cda03fee73433c148dd59f63e3c WatchSource:0}: Error finding container b293007af51f2e848a23bd23e603fe07d8613cda03fee73433c148dd59f63e3c: Status 404 returned error can't find the container with id b293007af51f2e848a23bd23e603fe07d8613cda03fee73433c148dd59f63e3c Dec 11 08:44:22 crc kubenswrapper[4992]: W1211 08:44:22.055528 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ccc6dfb_e898_4b91_b5ff_4d33ede78901.slice/crio-1bbb242ce2be8fade241362640d7c3ee0cbf72f333b48228a322a7ad72568aac WatchSource:0}: Error finding container 1bbb242ce2be8fade241362640d7c3ee0cbf72f333b48228a322a7ad72568aac: Status 404 returned error can't find the container with id 1bbb242ce2be8fade241362640d7c3ee0cbf72f333b48228a322a7ad72568aac Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.924143 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" event={"ID":"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4","Type":"ContainerStarted","Data":"6d4af57e8e849a6fb98f3e9ff9da1097c67192a8238f9073bb423c2de5dcbc95"} Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.937055 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6886c495d8-l2qjd" event={"ID":"6ccc6dfb-e898-4b91-b5ff-4d33ede78901","Type":"ContainerStarted","Data":"27b729564b62fed56080caa742d27b5fad8943e96075bcc943cbb104188cbf18"} Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.937105 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6886c495d8-l2qjd" event={"ID":"6ccc6dfb-e898-4b91-b5ff-4d33ede78901","Type":"ContainerStarted","Data":"1bbb242ce2be8fade241362640d7c3ee0cbf72f333b48228a322a7ad72568aac"} Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.939807 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" event={"ID":"c4783cdc-5222-45f5-b56c-f04c06cf7df7","Type":"ContainerStarted","Data":"70d24ddea37ae82981754533a8c4fe993aa2452f49bbfb7966caf8de5527010d"} Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.939837 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" 
event={"ID":"c4783cdc-5222-45f5-b56c-f04c06cf7df7","Type":"ContainerStarted","Data":"b293007af51f2e848a23bd23e603fe07d8613cda03fee73433c148dd59f63e3c"} Dec 11 08:44:22 crc kubenswrapper[4992]: I1211 08:44:22.942223 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76f54b9b99-lv6z6" event={"ID":"8cdf31db-16c4-4bfc-bb50-27a283b61abd","Type":"ContainerStarted","Data":"c32aef1eaf5db0b0ab930f492094c10437f7d1955642d722f7befd98165875a8"} Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.741357 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-859586f498-26phb"] Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.743481 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.745789 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.746070 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.757807 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-859586f498-26phb"] Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815414 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4282024f-9d71-4b55-aa65-b0a91e76da62-logs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815477 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-config-data\") pod 
\"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815571 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-config-data-custom\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815622 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-public-tls-certs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815741 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-combined-ca-bundle\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815768 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjr46\" (UniqueName: \"kubernetes.io/projected/4282024f-9d71-4b55-aa65-b0a91e76da62-kube-api-access-kjr46\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.815832 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-internal-tls-certs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919233 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-combined-ca-bundle\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919273 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjr46\" (UniqueName: \"kubernetes.io/projected/4282024f-9d71-4b55-aa65-b0a91e76da62-kube-api-access-kjr46\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919302 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-internal-tls-certs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919362 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4282024f-9d71-4b55-aa65-b0a91e76da62-logs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919379 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-config-data\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919400 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-config-data-custom\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.919434 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-public-tls-certs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.920157 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4282024f-9d71-4b55-aa65-b0a91e76da62-logs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.924369 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-public-tls-certs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.925682 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-config-data-custom\") 
pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.926595 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-combined-ca-bundle\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.927380 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-internal-tls-certs\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.928595 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4282024f-9d71-4b55-aa65-b0a91e76da62-config-data\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.937980 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjr46\" (UniqueName: \"kubernetes.io/projected/4282024f-9d71-4b55-aa65-b0a91e76da62-kube-api-access-kjr46\") pod \"barbican-api-859586f498-26phb\" (UID: \"4282024f-9d71-4b55-aa65-b0a91e76da62\") " pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.959991 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6886c495d8-l2qjd" event={"ID":"6ccc6dfb-e898-4b91-b5ff-4d33ede78901","Type":"ContainerStarted","Data":"128fa2ace3ab60ab82e383dd1c08edfea8b03168fa89784e7928b1bb654e9f64"} 
Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.960780 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.960884 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.964329 4992 generic.go:334] "Generic (PLEG): container finished" podID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerID="70d24ddea37ae82981754533a8c4fe993aa2452f49bbfb7966caf8de5527010d" exitCode=0 Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.964362 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" event={"ID":"c4783cdc-5222-45f5-b56c-f04c06cf7df7","Type":"ContainerDied","Data":"70d24ddea37ae82981754533a8c4fe993aa2452f49bbfb7966caf8de5527010d"} Dec 11 08:44:23 crc kubenswrapper[4992]: I1211 08:44:23.992227 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6886c495d8-l2qjd" podStartSLOduration=2.992207943 podStartE2EDuration="2.992207943s" podCreationTimestamp="2025-12-11 08:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:23.983257184 +0000 UTC m=+1288.242731120" watchObservedRunningTime="2025-12-11 08:44:23.992207943 +0000 UTC m=+1288.251681859" Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.081588 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.801872 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-859586f498-26phb"] Dec 11 08:44:24 crc kubenswrapper[4992]: W1211 08:44:24.805338 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4282024f_9d71_4b55_aa65_b0a91e76da62.slice/crio-6bf5958b55cb211f1dad11fa4bb15d677506a1627eab8d769956d251aa4e3e02 WatchSource:0}: Error finding container 6bf5958b55cb211f1dad11fa4bb15d677506a1627eab8d769956d251aa4e3e02: Status 404 returned error can't find the container with id 6bf5958b55cb211f1dad11fa4bb15d677506a1627eab8d769956d251aa4e3e02 Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.977756 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76f54b9b99-lv6z6" event={"ID":"8cdf31db-16c4-4bfc-bb50-27a283b61abd","Type":"ContainerStarted","Data":"09d78df482f75f6b1d48f18d3cafd5b15b551abc56a9ff00469d048a4029b86d"} Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.979452 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859586f498-26phb" event={"ID":"4282024f-9d71-4b55-aa65-b0a91e76da62","Type":"ContainerStarted","Data":"6bf5958b55cb211f1dad11fa4bb15d677506a1627eab8d769956d251aa4e3e02"} Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.984295 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" event={"ID":"c4783cdc-5222-45f5-b56c-f04c06cf7df7","Type":"ContainerStarted","Data":"b0b11ff2f87845f030600f4762eb89d38fafa28b3949298e650fb7a79f8f0a0e"} Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.984481 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.987826 4992 generic.go:334] "Generic 
(PLEG): container finished" podID="429dae0d-117e-4943-966e-11460f9676b7" containerID="22d72687af96c39323b29da101171820fbb3d544852bf7b6e45acf5c8555cf8e" exitCode=0 Dec 11 08:44:24 crc kubenswrapper[4992]: I1211 08:44:24.987921 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thwnm" event={"ID":"429dae0d-117e-4943-966e-11460f9676b7","Type":"ContainerDied","Data":"22d72687af96c39323b29da101171820fbb3d544852bf7b6e45acf5c8555cf8e"} Dec 11 08:44:25 crc kubenswrapper[4992]: I1211 08:44:25.006923 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" podStartSLOduration=4.006905361 podStartE2EDuration="4.006905361s" podCreationTimestamp="2025-12-11 08:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:25.005566978 +0000 UTC m=+1289.265040914" watchObservedRunningTime="2025-12-11 08:44:25.006905361 +0000 UTC m=+1289.266379287" Dec 11 08:44:25 crc kubenswrapper[4992]: I1211 08:44:25.998577 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859586f498-26phb" event={"ID":"4282024f-9d71-4b55-aa65-b0a91e76da62","Type":"ContainerStarted","Data":"936242819f1811ef925762c2058e2abc6277dc6031ba0e8f1b8f11ee080514c2"} Dec 11 08:44:25 crc kubenswrapper[4992]: I1211 08:44:25.999160 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-859586f498-26phb" event={"ID":"4282024f-9d71-4b55-aa65-b0a91e76da62","Type":"ContainerStarted","Data":"cb72ecb0f68f365e3a8564e1a4b2abe901f02efb31a65d1bbe13842e2a766a50"} Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.000468 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.000492 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.008166 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76f54b9b99-lv6z6" event={"ID":"8cdf31db-16c4-4bfc-bb50-27a283b61abd","Type":"ContainerStarted","Data":"2e72a0b8bd8139e89924ec085739c043034bee657137d8e4c569ca1295b226d1"} Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.010872 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" event={"ID":"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4","Type":"ContainerStarted","Data":"3c78d0ad0fd97ed2c296b4d16db7320ca35632b8b1f30a610bf2f3c12440ba86"} Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.010898 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" event={"ID":"a69c55bb-ed74-4b63-a8b6-713b08b1dcb4","Type":"ContainerStarted","Data":"0f889a570dc7a3214127d4680abf76048ee30f9d40bbfbc682863e354124a98f"} Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.016349 4992 generic.go:334] "Generic (PLEG): container finished" podID="73c99101-825a-4a3b-acf0-7fc522f3631f" containerID="ae4ff2d1d3811f0275fe7a48683a0f8b28982a4aeedd7083fc8e86a3c3c00288" exitCode=0 Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.017318 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8mmcr" event={"ID":"73c99101-825a-4a3b-acf0-7fc522f3631f","Type":"ContainerDied","Data":"ae4ff2d1d3811f0275fe7a48683a0f8b28982a4aeedd7083fc8e86a3c3c00288"} Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.023382 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-859586f498-26phb" podStartSLOduration=3.0233055 podStartE2EDuration="3.0233055s" podCreationTimestamp="2025-12-11 08:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 08:44:26.019503467 +0000 UTC m=+1290.278977393" watchObservedRunningTime="2025-12-11 08:44:26.0233055 +0000 UTC m=+1290.282779426" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.154052 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76f54b9b99-lv6z6" podStartSLOduration=3.784833793 podStartE2EDuration="6.154024122s" podCreationTimestamp="2025-12-11 08:44:20 +0000 UTC" firstStartedPulling="2025-12-11 08:44:21.894970266 +0000 UTC m=+1286.154444192" lastFinishedPulling="2025-12-11 08:44:24.264160595 +0000 UTC m=+1288.523634521" observedRunningTime="2025-12-11 08:44:26.035982281 +0000 UTC m=+1290.295456197" watchObservedRunningTime="2025-12-11 08:44:26.154024122 +0000 UTC m=+1290.413498048" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.195484 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-67cb46677b-w6zfw" podStartSLOduration=1.9329893249999999 podStartE2EDuration="5.195464048s" podCreationTimestamp="2025-12-11 08:44:21 +0000 UTC" firstStartedPulling="2025-12-11 08:44:22.033353496 +0000 UTC m=+1286.292827422" lastFinishedPulling="2025-12-11 08:44:25.295828219 +0000 UTC m=+1289.555302145" observedRunningTime="2025-12-11 08:44:26.070096306 +0000 UTC m=+1290.329570232" watchObservedRunningTime="2025-12-11 08:44:26.195464048 +0000 UTC m=+1290.454937974" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.506238 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-thwnm" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.583223 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qj9w\" (UniqueName: \"kubernetes.io/projected/429dae0d-117e-4943-966e-11460f9676b7-kube-api-access-9qj9w\") pod \"429dae0d-117e-4943-966e-11460f9676b7\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.583316 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-config\") pod \"429dae0d-117e-4943-966e-11460f9676b7\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.583348 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-combined-ca-bundle\") pod \"429dae0d-117e-4943-966e-11460f9676b7\" (UID: \"429dae0d-117e-4943-966e-11460f9676b7\") " Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.588470 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/429dae0d-117e-4943-966e-11460f9676b7-kube-api-access-9qj9w" (OuterVolumeSpecName: "kube-api-access-9qj9w") pod "429dae0d-117e-4943-966e-11460f9676b7" (UID: "429dae0d-117e-4943-966e-11460f9676b7"). InnerVolumeSpecName "kube-api-access-9qj9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.614269 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-config" (OuterVolumeSpecName: "config") pod "429dae0d-117e-4943-966e-11460f9676b7" (UID: "429dae0d-117e-4943-966e-11460f9676b7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.630105 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "429dae0d-117e-4943-966e-11460f9676b7" (UID: "429dae0d-117e-4943-966e-11460f9676b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.685783 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qj9w\" (UniqueName: \"kubernetes.io/projected/429dae0d-117e-4943-966e-11460f9676b7-kube-api-access-9qj9w\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.685825 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:26 crc kubenswrapper[4992]: I1211 08:44:26.685840 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/429dae0d-117e-4943-966e-11460f9676b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.047720 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-thwnm" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.048263 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thwnm" event={"ID":"429dae0d-117e-4943-966e-11460f9676b7","Type":"ContainerDied","Data":"a95ff77a87e9826a646deb8b48ae2ab75d20e2101e2b691b22a70cbd82c46c5c"} Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.048385 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95ff77a87e9826a646deb8b48ae2ab75d20e2101e2b691b22a70cbd82c46c5c" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.212005 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-c5f6c"] Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.212219 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns" containerID="cri-o://b0b11ff2f87845f030600f4762eb89d38fafa28b3949298e650fb7a79f8f0a0e" gracePeriod=10 Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.239140 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-5j6cp"] Dec 11 08:44:27 crc kubenswrapper[4992]: E1211 08:44:27.240231 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="429dae0d-117e-4943-966e-11460f9676b7" containerName="neutron-db-sync" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.240254 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="429dae0d-117e-4943-966e-11460f9676b7" containerName="neutron-db-sync" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.240526 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="429dae0d-117e-4943-966e-11460f9676b7" containerName="neutron-db-sync" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.241535 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.269953 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-5j6cp"] Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.302103 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.302156 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-config\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.302232 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.302315 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.302343 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.302437 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsq5\" (UniqueName: \"kubernetes.io/projected/4e585d4a-7315-4ad3-a670-4b7eff004054-kube-api-access-2vsq5\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.404528 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.404927 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.405025 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsq5\" (UniqueName: \"kubernetes.io/projected/4e585d4a-7315-4ad3-a670-4b7eff004054-kube-api-access-2vsq5\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.405150 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.405185 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-config\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.405257 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.406028 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.406299 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.406550 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.406676 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.429803 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vsq5\" (UniqueName: \"kubernetes.io/projected/4e585d4a-7315-4ad3-a670-4b7eff004054-kube-api-access-2vsq5\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.492491 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607492 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-combined-ca-bundle\") pod \"73c99101-825a-4a3b-acf0-7fc522f3631f\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607575 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c99101-825a-4a3b-acf0-7fc522f3631f-etc-machine-id\") pod \"73c99101-825a-4a3b-acf0-7fc522f3631f\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607618 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-db-sync-config-data\") pod \"73c99101-825a-4a3b-acf0-7fc522f3631f\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607749 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73c99101-825a-4a3b-acf0-7fc522f3631f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "73c99101-825a-4a3b-acf0-7fc522f3631f" (UID: "73c99101-825a-4a3b-acf0-7fc522f3631f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607834 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpt6c\" (UniqueName: \"kubernetes.io/projected/73c99101-825a-4a3b-acf0-7fc522f3631f-kube-api-access-gpt6c\") pod \"73c99101-825a-4a3b-acf0-7fc522f3631f\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607903 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-config-data\") pod \"73c99101-825a-4a3b-acf0-7fc522f3631f\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.607960 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-scripts\") pod \"73c99101-825a-4a3b-acf0-7fc522f3631f\" (UID: \"73c99101-825a-4a3b-acf0-7fc522f3631f\") " Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.608569 4992 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c99101-825a-4a3b-acf0-7fc522f3631f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.612739 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c99101-825a-4a3b-acf0-7fc522f3631f-kube-api-access-gpt6c" (OuterVolumeSpecName: "kube-api-access-gpt6c") pod "73c99101-825a-4a3b-acf0-7fc522f3631f" (UID: "73c99101-825a-4a3b-acf0-7fc522f3631f"). InnerVolumeSpecName "kube-api-access-gpt6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.614040 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-scripts" (OuterVolumeSpecName: "scripts") pod "73c99101-825a-4a3b-acf0-7fc522f3631f" (UID: "73c99101-825a-4a3b-acf0-7fc522f3631f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.629807 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "73c99101-825a-4a3b-acf0-7fc522f3631f" (UID: "73c99101-825a-4a3b-acf0-7fc522f3631f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.710083 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpt6c\" (UniqueName: \"kubernetes.io/projected/73c99101-825a-4a3b-acf0-7fc522f3631f-kube-api-access-gpt6c\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.710122 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.710134 4992 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.719155 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"73c99101-825a-4a3b-acf0-7fc522f3631f" (UID: "73c99101-825a-4a3b-acf0-7fc522f3631f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.743791 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-config-data" (OuterVolumeSpecName: "config-data") pod "73c99101-825a-4a3b-acf0-7fc522f3631f" (UID: "73c99101-825a-4a3b-acf0-7fc522f3631f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.818570 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:27 crc kubenswrapper[4992]: I1211 08:44:27.818615 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c99101-825a-4a3b-acf0-7fc522f3631f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.062503 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-8mmcr" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.062560 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-8mmcr" event={"ID":"73c99101-825a-4a3b-acf0-7fc522f3631f","Type":"ContainerDied","Data":"4b4e44e36bedd6e6c89be54ccee98c7ac8796a1d74267d953383fa30fd68de0f"} Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.062586 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b4e44e36bedd6e6c89be54ccee98c7ac8796a1d74267d953383fa30fd68de0f" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.124408 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-d94866685-kpw9g"] Dec 11 08:44:28 crc kubenswrapper[4992]: E1211 08:44:28.124697 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c99101-825a-4a3b-acf0-7fc522f3631f" containerName="cinder-db-sync" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.124708 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c99101-825a-4a3b-acf0-7fc522f3631f" containerName="cinder-db-sync" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.124902 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c99101-825a-4a3b-acf0-7fc522f3631f" containerName="cinder-db-sync" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.125779 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.135139 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.135226 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.140763 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.146293 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d94866685-kpw9g"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.226860 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-public-tls-certs\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.226934 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10bd5d3-8ab6-4950-96e9-b683e47619ea-log-httpd\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.226961 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-internal-tls-certs\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 
08:44:28.227008 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10bd5d3-8ab6-4950-96e9-b683e47619ea-etc-swift\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.227024 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-combined-ca-bundle\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.227063 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w84j\" (UniqueName: \"kubernetes.io/projected/f10bd5d3-8ab6-4950-96e9-b683e47619ea-kube-api-access-9w84j\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.227080 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-config-data\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.227101 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10bd5d3-8ab6-4950-96e9-b683e47619ea-run-httpd\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc 
kubenswrapper[4992]: I1211 08:44:28.328884 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10bd5d3-8ab6-4950-96e9-b683e47619ea-run-httpd\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329025 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-public-tls-certs\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329064 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10bd5d3-8ab6-4950-96e9-b683e47619ea-log-httpd\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329086 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-internal-tls-certs\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329135 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10bd5d3-8ab6-4950-96e9-b683e47619ea-etc-swift\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329160 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-combined-ca-bundle\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329211 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w84j\" (UniqueName: \"kubernetes.io/projected/f10bd5d3-8ab6-4950-96e9-b683e47619ea-kube-api-access-9w84j\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.329232 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-config-data\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.330362 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10bd5d3-8ab6-4950-96e9-b683e47619ea-run-httpd\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.332000 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10bd5d3-8ab6-4950-96e9-b683e47619ea-log-httpd\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.336394 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-config-data\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.336731 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-internal-tls-certs\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.338369 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10bd5d3-8ab6-4950-96e9-b683e47619ea-etc-swift\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.342650 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-combined-ca-bundle\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.344159 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10bd5d3-8ab6-4950-96e9-b683e47619ea-public-tls-certs\") pod \"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.355223 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w84j\" (UniqueName: \"kubernetes.io/projected/f10bd5d3-8ab6-4950-96e9-b683e47619ea-kube-api-access-9w84j\") pod 
\"swift-proxy-d94866685-kpw9g\" (UID: \"f10bd5d3-8ab6-4950-96e9-b683e47619ea\") " pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.462755 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.568157 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-config\") pod \"dnsmasq-dns-75c8ddd69c-5j6cp\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.654774 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66db4d95cb-74j4r"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.657027 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.666607 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.667087 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pvh9n" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.667153 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.667252 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.678306 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.679923 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.686360 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.686667 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p8krz" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.686800 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.690082 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.698056 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66db4d95cb-74j4r"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.711182 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740300 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-ovndb-tls-certs\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740370 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50b99209-7df9-4ae4-9795-37a362cd6373-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740392 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-combined-ca-bundle\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740439 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740458 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-config\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740497 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69zpd\" (UniqueName: \"kubernetes.io/projected/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-kube-api-access-69zpd\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740532 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-httpd-config\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740597 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rmgz7\" (UniqueName: \"kubernetes.io/projected/50b99209-7df9-4ae4-9795-37a362cd6373-kube-api-access-rmgz7\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740662 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740681 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.740705 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-scripts\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.761745 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.775869 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-5j6cp"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.808408 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-wxwh2"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.810590 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.815305 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-wxwh2"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842013 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842074 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-config\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842092 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69zpd\" (UniqueName: \"kubernetes.io/projected/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-kube-api-access-69zpd\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842145 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-httpd-config\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842220 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmgz7\" (UniqueName: \"kubernetes.io/projected/50b99209-7df9-4ae4-9795-37a362cd6373-kube-api-access-rmgz7\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842250 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842288 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842308 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-scripts\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842364 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-ovndb-tls-certs\") pod 
\"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842397 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50b99209-7df9-4ae4-9795-37a362cd6373-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.842414 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-combined-ca-bundle\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.844757 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50b99209-7df9-4ae4-9795-37a362cd6373-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.869017 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.872403 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-ovndb-tls-certs\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc 
kubenswrapper[4992]: I1211 08:44:28.872519 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.872699 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.872908 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-scripts\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.879679 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69zpd\" (UniqueName: \"kubernetes.io/projected/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-kube-api-access-69zpd\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.880910 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-combined-ca-bundle\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.881976 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-config\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.893741 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmgz7\" (UniqueName: \"kubernetes.io/projected/50b99209-7df9-4ae4-9795-37a362cd6373-kube-api-access-rmgz7\") pod \"cinder-scheduler-0\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " pod="openstack/cinder-scheduler-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.895334 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-httpd-config\") pod \"neutron-66db4d95cb-74j4r\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.933701 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.935450 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.940470 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.946582 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.946645 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-svc\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.946671 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.946689 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.946730 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-config\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.946826 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvzbz\" (UniqueName: \"kubernetes.io/projected/79144990-622b-4d1b-8f2d-26707a7a6bd2-kube-api-access-jvzbz\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:28 crc kubenswrapper[4992]: I1211 08:44:28.968643 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.004232 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.052843 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.052920 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data-custom\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.052965 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053016 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvzbz\" (UniqueName: \"kubernetes.io/projected/79144990-622b-4d1b-8f2d-26707a7a6bd2-kube-api-access-jvzbz\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053044 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-scripts\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053098 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26kft\" (UniqueName: \"kubernetes.io/projected/3ec8451b-e888-4b84-8cc3-0185265b8eae-kube-api-access-26kft\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053123 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ec8451b-e888-4b84-8cc3-0185265b8eae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053174 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053218 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-svc\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053246 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec8451b-e888-4b84-8cc3-0185265b8eae-logs\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053272 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053299 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.053350 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-config\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " 
pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.054438 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-config\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.054441 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-svc\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.054646 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.054662 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.054995 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.055103 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.079057 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvzbz\" (UniqueName: \"kubernetes.io/projected/79144990-622b-4d1b-8f2d-26707a7a6bd2-kube-api-access-jvzbz\") pod \"dnsmasq-dns-5784cf869f-wxwh2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.121762 4992 generic.go:334] "Generic (PLEG): container finished" podID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerID="b0b11ff2f87845f030600f4762eb89d38fafa28b3949298e650fb7a79f8f0a0e" exitCode=0 Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.121830 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" event={"ID":"c4783cdc-5222-45f5-b56c-f04c06cf7df7","Type":"ContainerDied","Data":"b0b11ff2f87845f030600f4762eb89d38fafa28b3949298e650fb7a79f8f0a0e"} Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.156300 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26kft\" (UniqueName: \"kubernetes.io/projected/3ec8451b-e888-4b84-8cc3-0185265b8eae-kube-api-access-26kft\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.156404 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3ec8451b-e888-4b84-8cc3-0185265b8eae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.156483 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec8451b-e888-4b84-8cc3-0185265b8eae-logs\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.156770 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.156858 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data-custom\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.156930 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.157003 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-scripts\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.158391 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ec8451b-e888-4b84-8cc3-0185265b8eae-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.217040 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.234864 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.238318 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data-custom\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.238509 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec8451b-e888-4b84-8cc3-0185265b8eae-logs\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.246352 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.246707 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-scripts\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.247199 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26kft\" (UniqueName: \"kubernetes.io/projected/3ec8451b-e888-4b84-8cc3-0185265b8eae-kube-api-access-26kft\") pod \"cinder-api-0\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.247560 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-5j6cp"] Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.276995 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d94866685-kpw9g"] Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.289521 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.936266 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.936938 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-central-agent" containerID="cri-o://33a790dfdb73d816a315e25db1506c2e0bc4433cf2a7c44400b12f65f7afafbb" gracePeriod=30 Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.937092 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="proxy-httpd" containerID="cri-o://5648e382dce9e69198a13d156f5f6a3145ca3ea7b7afcbabd905957b155749c9" gracePeriod=30 Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.937152 4992 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="sg-core" containerID="cri-o://e70914344f697e3a0b2023c5175f2a5257551e372d0ab94024320dc6c33e2083" gracePeriod=30 Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.937206 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-notification-agent" containerID="cri-o://366e04509132822b09c09069ef03eec89dbaf165e106c581c1949254fad33501" gracePeriod=30 Dec 11 08:44:29 crc kubenswrapper[4992]: I1211 08:44:29.946437 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.156240 4992 generic.go:334] "Generic (PLEG): container finished" podID="3483a09f-bb6f-470f-9485-3241dd60a448" containerID="e70914344f697e3a0b2023c5175f2a5257551e372d0ab94024320dc6c33e2083" exitCode=2 Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.156294 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerDied","Data":"e70914344f697e3a0b2023c5175f2a5257551e372d0ab94024320dc6c33e2083"} Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.628894 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tnwhj"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.633038 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.643352 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tnwhj"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.704027 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7m25\" (UniqueName: \"kubernetes.io/projected/c870d81b-61d3-4eb2-b408-ce51fae0e19f-kube-api-access-l7m25\") pod \"nova-api-db-create-tnwhj\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.704112 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c870d81b-61d3-4eb2-b408-ce51fae0e19f-operator-scripts\") pod \"nova-api-db-create-tnwhj\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.742944 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c9e3-account-create-update-vxlqn"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.744436 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.752287 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.763157 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c9e3-account-create-update-vxlqn"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.806353 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7m25\" (UniqueName: \"kubernetes.io/projected/c870d81b-61d3-4eb2-b408-ce51fae0e19f-kube-api-access-l7m25\") pod \"nova-api-db-create-tnwhj\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.806424 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da60d29-4843-4e25-804b-a5b89de8f2f2-operator-scripts\") pod \"nova-api-c9e3-account-create-update-vxlqn\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.806478 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c870d81b-61d3-4eb2-b408-ce51fae0e19f-operator-scripts\") pod \"nova-api-db-create-tnwhj\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.806644 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtlzh\" (UniqueName: \"kubernetes.io/projected/1da60d29-4843-4e25-804b-a5b89de8f2f2-kube-api-access-gtlzh\") pod \"nova-api-c9e3-account-create-update-vxlqn\" (UID: 
\"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.808541 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c870d81b-61d3-4eb2-b408-ce51fae0e19f-operator-scripts\") pod \"nova-api-db-create-tnwhj\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.835895 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-x24wj"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.837308 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.850252 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7m25\" (UniqueName: \"kubernetes.io/projected/c870d81b-61d3-4eb2-b408-ce51fae0e19f-kube-api-access-l7m25\") pod \"nova-api-db-create-tnwhj\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.876953 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x24wj"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.908647 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88276bfc-171a-4c6d-b2a8-342e9a6f856d-operator-scripts\") pod \"nova-cell0-db-create-x24wj\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.908696 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtlzh\" (UniqueName: 
\"kubernetes.io/projected/1da60d29-4843-4e25-804b-a5b89de8f2f2-kube-api-access-gtlzh\") pod \"nova-api-c9e3-account-create-update-vxlqn\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.908775 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da60d29-4843-4e25-804b-a5b89de8f2f2-operator-scripts\") pod \"nova-api-c9e3-account-create-update-vxlqn\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.908838 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5947k\" (UniqueName: \"kubernetes.io/projected/88276bfc-171a-4c6d-b2a8-342e9a6f856d-kube-api-access-5947k\") pod \"nova-cell0-db-create-x24wj\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.910029 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da60d29-4843-4e25-804b-a5b89de8f2f2-operator-scripts\") pod \"nova-api-c9e3-account-create-update-vxlqn\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.946462 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtlzh\" (UniqueName: \"kubernetes.io/projected/1da60d29-4843-4e25-804b-a5b89de8f2f2-kube-api-access-gtlzh\") pod \"nova-api-c9e3-account-create-update-vxlqn\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.946542 4992 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-jqlwz"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.948014 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.957625 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.963712 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c59a-account-create-update-db54s"] Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.966032 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.968295 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 11 08:44:30 crc kubenswrapper[4992]: I1211 08:44:30.988560 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jqlwz"] Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.010989 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6cbw\" (UniqueName: \"kubernetes.io/projected/953feb5a-bed1-4457-b25e-fa4716bbad75-kube-api-access-c6cbw\") pod \"nova-cell1-db-create-jqlwz\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.016595 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88276bfc-171a-4c6d-b2a8-342e9a6f856d-operator-scripts\") pod \"nova-cell0-db-create-x24wj\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 
08:44:31.016843 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953feb5a-bed1-4457-b25e-fa4716bbad75-operator-scripts\") pod \"nova-cell1-db-create-jqlwz\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.016883 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5947k\" (UniqueName: \"kubernetes.io/projected/88276bfc-171a-4c6d-b2a8-342e9a6f856d-kube-api-access-5947k\") pod \"nova-cell0-db-create-x24wj\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.017731 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c59a-account-create-update-db54s"] Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.024744 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88276bfc-171a-4c6d-b2a8-342e9a6f856d-operator-scripts\") pod \"nova-cell0-db-create-x24wj\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.053835 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5947k\" (UniqueName: \"kubernetes.io/projected/88276bfc-171a-4c6d-b2a8-342e9a6f856d-kube-api-access-5947k\") pod \"nova-cell0-db-create-x24wj\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.060902 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.119018 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953feb5a-bed1-4457-b25e-fa4716bbad75-operator-scripts\") pod \"nova-cell1-db-create-jqlwz\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.119104 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-operator-scripts\") pod \"nova-cell0-c59a-account-create-update-db54s\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.119177 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6cbw\" (UniqueName: \"kubernetes.io/projected/953feb5a-bed1-4457-b25e-fa4716bbad75-kube-api-access-c6cbw\") pod \"nova-cell1-db-create-jqlwz\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.119267 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwffd\" (UniqueName: \"kubernetes.io/projected/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-kube-api-access-hwffd\") pod \"nova-cell0-c59a-account-create-update-db54s\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.120151 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/953feb5a-bed1-4457-b25e-fa4716bbad75-operator-scripts\") pod \"nova-cell1-db-create-jqlwz\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.147185 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6cbw\" (UniqueName: \"kubernetes.io/projected/953feb5a-bed1-4457-b25e-fa4716bbad75-kube-api-access-c6cbw\") pod \"nova-cell1-db-create-jqlwz\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.149342 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-580c-account-create-update-5jxfz"] Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.151022 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.157184 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.189665 4992 generic.go:334] "Generic (PLEG): container finished" podID="3483a09f-bb6f-470f-9485-3241dd60a448" containerID="5648e382dce9e69198a13d156f5f6a3145ca3ea7b7afcbabd905957b155749c9" exitCode=0 Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.189813 4992 generic.go:334] "Generic (PLEG): container finished" podID="3483a09f-bb6f-470f-9485-3241dd60a448" containerID="33a790dfdb73d816a315e25db1506c2e0bc4433cf2a7c44400b12f65f7afafbb" exitCode=0 Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.189726 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerDied","Data":"5648e382dce9e69198a13d156f5f6a3145ca3ea7b7afcbabd905957b155749c9"} Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 
08:44:31.189850 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerDied","Data":"33a790dfdb73d816a315e25db1506c2e0bc4433cf2a7c44400b12f65f7afafbb"} Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.194745 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-580c-account-create-update-5jxfz"] Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.216288 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.224661 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-operator-scripts\") pod \"nova-cell1-580c-account-create-update-5jxfz\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.224702 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-operator-scripts\") pod \"nova-cell0-c59a-account-create-update-db54s\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.224845 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwffd\" (UniqueName: \"kubernetes.io/projected/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-kube-api-access-hwffd\") pod \"nova-cell0-c59a-account-create-update-db54s\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.224884 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pd2\" (UniqueName: \"kubernetes.io/projected/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-kube-api-access-p6pd2\") pod \"nova-cell1-580c-account-create-update-5jxfz\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.225442 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-operator-scripts\") pod \"nova-cell0-c59a-account-create-update-db54s\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.267877 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwffd\" (UniqueName: \"kubernetes.io/projected/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-kube-api-access-hwffd\") pod \"nova-cell0-c59a-account-create-update-db54s\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.318762 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.326450 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-operator-scripts\") pod \"nova-cell1-580c-account-create-update-5jxfz\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.326574 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pd2\" (UniqueName: \"kubernetes.io/projected/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-kube-api-access-p6pd2\") pod \"nova-cell1-580c-account-create-update-5jxfz\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.327683 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-operator-scripts\") pod \"nova-cell1-580c-account-create-update-5jxfz\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.331059 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.343381 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pd2\" (UniqueName: \"kubernetes.io/projected/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-kube-api-access-p6pd2\") pod \"nova-cell1-580c-account-create-update-5jxfz\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:31 crc kubenswrapper[4992]: I1211 08:44:31.514993 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:44:32 crc kubenswrapper[4992]: I1211 08:44:32.030245 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:44:33 crc kubenswrapper[4992]: I1211 08:44:33.206957 4992 generic.go:334] "Generic (PLEG): container finished" podID="3483a09f-bb6f-470f-9485-3241dd60a448" containerID="366e04509132822b09c09069ef03eec89dbaf165e106c581c1949254fad33501" exitCode=0 Dec 11 08:44:33 crc kubenswrapper[4992]: I1211 08:44:33.207109 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerDied","Data":"366e04509132822b09c09069ef03eec89dbaf165e106c581c1949254fad33501"} Dec 11 08:44:33 crc kubenswrapper[4992]: I1211 08:44:33.414062 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:33 crc kubenswrapper[4992]: I1211 08:44:33.516973 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.279335 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7978c485bf-hpg7n"] Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.281376 4992 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.292944 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.293155 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.307891 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7978c485bf-hpg7n"] Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.383871 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-public-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.383932 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-httpd-config\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.384037 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-internal-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.384064 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-config\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.384153 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-ovndb-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.384240 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-combined-ca-bundle\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.384266 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9mx\" (UniqueName: \"kubernetes.io/projected/04b4ce41-af3f-42d1-a340-e3d20519f217-kube-api-access-9g9mx\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.487907 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-internal-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.487957 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-config\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.488087 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-ovndb-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.488166 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-combined-ca-bundle\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.488188 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g9mx\" (UniqueName: \"kubernetes.io/projected/04b4ce41-af3f-42d1-a340-e3d20519f217-kube-api-access-9g9mx\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.488223 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-public-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.488241 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-httpd-config\") pod 
\"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.501630 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-config\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.504391 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-httpd-config\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.505001 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-internal-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.506910 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-ovndb-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.508812 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g9mx\" (UniqueName: \"kubernetes.io/projected/04b4ce41-af3f-42d1-a340-e3d20519f217-kube-api-access-9g9mx\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc 
kubenswrapper[4992]: I1211 08:44:34.540562 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-public-tls-certs\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.543517 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b4ce41-af3f-42d1-a340-e3d20519f217-combined-ca-bundle\") pod \"neutron-7978c485bf-hpg7n\" (UID: \"04b4ce41-af3f-42d1-a340-e3d20519f217\") " pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:34 crc kubenswrapper[4992]: I1211 08:44:34.607066 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:36 crc kubenswrapper[4992]: I1211 08:44:36.041182 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:36 crc kubenswrapper[4992]: I1211 08:44:36.286243 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-859586f498-26phb" Dec 11 08:44:36 crc kubenswrapper[4992]: I1211 08:44:36.358660 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6886c495d8-l2qjd"] Dec 11 08:44:36 crc kubenswrapper[4992]: I1211 08:44:36.358939 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6886c495d8-l2qjd" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log" containerID="cri-o://27b729564b62fed56080caa742d27b5fad8943e96075bcc943cbb104188cbf18" gracePeriod=30 Dec 11 08:44:36 crc kubenswrapper[4992]: I1211 08:44:36.359095 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6886c495d8-l2qjd" 
podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api" containerID="cri-o://128fa2ace3ab60ab82e383dd1c08edfea8b03168fa89784e7928b1bb654e9f64" gracePeriod=30 Dec 11 08:44:36 crc kubenswrapper[4992]: I1211 08:44:36.540825 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: i/o timeout" Dec 11 08:44:37 crc kubenswrapper[4992]: I1211 08:44:37.140504 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.151:3000/\": dial tcp 10.217.0.151:3000: connect: connection refused" Dec 11 08:44:38 crc kubenswrapper[4992]: I1211 08:44:38.268179 4992 generic.go:334] "Generic (PLEG): container finished" podID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerID="27b729564b62fed56080caa742d27b5fad8943e96075bcc943cbb104188cbf18" exitCode=143 Dec 11 08:44:38 crc kubenswrapper[4992]: I1211 08:44:38.268265 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6886c495d8-l2qjd" event={"ID":"6ccc6dfb-e898-4b91-b5ff-4d33ede78901","Type":"ContainerDied","Data":"27b729564b62fed56080caa742d27b5fad8943e96075bcc943cbb104188cbf18"} Dec 11 08:44:39 crc kubenswrapper[4992]: I1211 08:44:39.501523 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6886c495d8-l2qjd" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:60948->10.217.0.157:9311: read: connection reset by peer" Dec 11 08:44:39 crc kubenswrapper[4992]: I1211 08:44:39.501591 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6886c495d8-l2qjd" 
podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:60958->10.217.0.157:9311: read: connection reset by peer" Dec 11 08:44:40 crc kubenswrapper[4992]: I1211 08:44:40.289544 4992 generic.go:334] "Generic (PLEG): container finished" podID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerID="128fa2ace3ab60ab82e383dd1c08edfea8b03168fa89784e7928b1bb654e9f64" exitCode=0 Dec 11 08:44:40 crc kubenswrapper[4992]: I1211 08:44:40.289641 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6886c495d8-l2qjd" event={"ID":"6ccc6dfb-e898-4b91-b5ff-4d33ede78901","Type":"ContainerDied","Data":"128fa2ace3ab60ab82e383dd1c08edfea8b03168fa89784e7928b1bb654e9f64"} Dec 11 08:44:41 crc kubenswrapper[4992]: I1211 08:44:41.541093 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: i/o timeout" Dec 11 08:44:41 crc kubenswrapper[4992]: I1211 08:44:41.671603 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6886c495d8-l2qjd" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Dec 11 08:44:41 crc kubenswrapper[4992]: I1211 08:44:41.671734 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6886c495d8-l2qjd" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Dec 11 08:44:46 crc kubenswrapper[4992]: I1211 08:44:46.542156 4992 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: i/o timeout" Dec 11 08:44:46 crc kubenswrapper[4992]: I1211 08:44:46.671939 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6886c495d8-l2qjd" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Dec 11 08:44:46 crc kubenswrapper[4992]: I1211 08:44:46.672043 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:46 crc kubenswrapper[4992]: I1211 08:44:46.672013 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6886c495d8-l2qjd" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": dial tcp 10.217.0.157:9311: connect: connection refused" Dec 11 08:44:46 crc kubenswrapper[4992]: I1211 08:44:46.672155 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6886c495d8-l2qjd" Dec 11 08:44:50 crc kubenswrapper[4992]: I1211 08:44:50.858833 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:50 crc kubenswrapper[4992]: I1211 08:44:50.861247 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74db564d44-wj6gh" Dec 11 08:44:50 crc kubenswrapper[4992]: E1211 08:44:50.875961 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 11 08:44:50 crc kubenswrapper[4992]: E1211 08:44:50.876498 
4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n96h676h557hbdh665h667h655h566h54dh6bh56bh598h5d7h587h5f4h678h6fh648hb5h64ch68ch5f6hb4h654h568hfbh7hcdh6bh648h699h5fcq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvsrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(421fdf51-5a39-4d80-b066-a715006c2f85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 08:44:50 crc kubenswrapper[4992]: E1211 08:44:50.878516 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="421fdf51-5a39-4d80-b066-a715006c2f85" Dec 11 08:44:50 crc kubenswrapper[4992]: I1211 08:44:50.923543 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.054035 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-config\") pod \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.054142 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-sb\") pod \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.054184 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-svc\") pod \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.054243 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-swift-storage-0\") pod \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.054340 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-nb\") pod \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.054377 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wsjj\" 
(UniqueName: \"kubernetes.io/projected/c4783cdc-5222-45f5-b56c-f04c06cf7df7-kube-api-access-7wsjj\") pod \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\" (UID: \"c4783cdc-5222-45f5-b56c-f04c06cf7df7\") " Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.065654 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4783cdc-5222-45f5-b56c-f04c06cf7df7-kube-api-access-7wsjj" (OuterVolumeSpecName: "kube-api-access-7wsjj") pod "c4783cdc-5222-45f5-b56c-f04c06cf7df7" (UID: "c4783cdc-5222-45f5-b56c-f04c06cf7df7"). InnerVolumeSpecName "kube-api-access-7wsjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.160404 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wsjj\" (UniqueName: \"kubernetes.io/projected/c4783cdc-5222-45f5-b56c-f04c06cf7df7-kube-api-access-7wsjj\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.221531 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4783cdc-5222-45f5-b56c-f04c06cf7df7" (UID: "c4783cdc-5222-45f5-b56c-f04c06cf7df7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.229546 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c4783cdc-5222-45f5-b56c-f04c06cf7df7" (UID: "c4783cdc-5222-45f5-b56c-f04c06cf7df7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.233412 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4783cdc-5222-45f5-b56c-f04c06cf7df7" (UID: "c4783cdc-5222-45f5-b56c-f04c06cf7df7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.262435 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.262469 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.262485 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.269430 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-config" (OuterVolumeSpecName: "config") pod "c4783cdc-5222-45f5-b56c-f04c06cf7df7" (UID: "c4783cdc-5222-45f5-b56c-f04c06cf7df7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.308717 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4783cdc-5222-45f5-b56c-f04c06cf7df7" (UID: "c4783cdc-5222-45f5-b56c-f04c06cf7df7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.364592 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-config\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.364646 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4783cdc-5222-45f5-b56c-f04c06cf7df7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.423562 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" event={"ID":"4e585d4a-7315-4ad3-a670-4b7eff004054","Type":"ContainerStarted","Data":"624274ffec2a792a6f88260c0348be5f19cc683a980e5009af525ac31e2c2361"}
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.427064 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" event={"ID":"c4783cdc-5222-45f5-b56c-f04c06cf7df7","Type":"ContainerDied","Data":"b293007af51f2e848a23bd23e603fe07d8613cda03fee73433c148dd59f63e3c"}
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.427135 4992 scope.go:117] "RemoveContainer" containerID="b0b11ff2f87845f030600f4762eb89d38fafa28b3949298e650fb7a79f8f0a0e"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.427296 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.432273 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d94866685-kpw9g" event={"ID":"f10bd5d3-8ab6-4950-96e9-b683e47619ea","Type":"ContainerStarted","Data":"9b7b463bc1beeb017d22e00cdb898374cedde96250f16e8b44cd452e7799da1e"}
Dec 11 08:44:51 crc kubenswrapper[4992]: E1211 08:44:51.461575 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="421fdf51-5a39-4d80-b066-a715006c2f85"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.485554 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.543265 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59d5ff467f-c5f6c" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: i/o timeout"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.543863 4992 scope.go:117] "RemoveContainer" containerID="70d24ddea37ae82981754533a8c4fe993aa2452f49bbfb7966caf8de5527010d"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.553054 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6886c495d8-l2qjd"
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.568982 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-sg-core-conf-yaml\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.569040 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgl6v\" (UniqueName: \"kubernetes.io/projected/3483a09f-bb6f-470f-9485-3241dd60a448-kube-api-access-vgl6v\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.569068 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-run-httpd\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.569134 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-log-httpd\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.569213 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-scripts\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.569285 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-config-data\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.569304 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-combined-ca-bundle\") pod \"3483a09f-bb6f-470f-9485-3241dd60a448\" (UID: \"3483a09f-bb6f-470f-9485-3241dd60a448\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.570574 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.570946 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.594249 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3483a09f-bb6f-470f-9485-3241dd60a448-kube-api-access-vgl6v" (OuterVolumeSpecName: "kube-api-access-vgl6v") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "kube-api-access-vgl6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.595789 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-scripts" (OuterVolumeSpecName: "scripts") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.618056 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-c5f6c"]
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.635256 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-c5f6c"]
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.672407 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-logs\") pod \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.672474 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data-custom\") pod \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.672712 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data\") pod \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.674743 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kk59\" (UniqueName: \"kubernetes.io/projected/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-kube-api-access-6kk59\") pod \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.674850 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-combined-ca-bundle\") pod \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\" (UID: \"6ccc6dfb-e898-4b91-b5ff-4d33ede78901\") "
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.675208 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-logs" (OuterVolumeSpecName: "logs") pod "6ccc6dfb-e898-4b91-b5ff-4d33ede78901" (UID: "6ccc6dfb-e898-4b91-b5ff-4d33ede78901"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.675777 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-logs\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.675802 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgl6v\" (UniqueName: \"kubernetes.io/projected/3483a09f-bb6f-470f-9485-3241dd60a448-kube-api-access-vgl6v\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.675813 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.675824 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3483a09f-bb6f-470f-9485-3241dd60a448-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.675832 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.726805 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6ccc6dfb-e898-4b91-b5ff-4d33ede78901" (UID: "6ccc6dfb-e898-4b91-b5ff-4d33ede78901"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: W1211 08:44:51.738357 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88276bfc_171a_4c6d_b2a8_342e9a6f856d.slice/crio-17cf91bc7a6a8b3c199ece4c33439b9721f42af25ec643fe37c36c33e366036d WatchSource:0}: Error finding container 17cf91bc7a6a8b3c199ece4c33439b9721f42af25ec643fe37c36c33e366036d: Status 404 returned error can't find the container with id 17cf91bc7a6a8b3c199ece4c33439b9721f42af25ec643fe37c36c33e366036d
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.746199 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-x24wj"]
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.765652 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-kube-api-access-6kk59" (OuterVolumeSpecName: "kube-api-access-6kk59") pod "6ccc6dfb-e898-4b91-b5ff-4d33ede78901" (UID: "6ccc6dfb-e898-4b91-b5ff-4d33ede78901"). InnerVolumeSpecName "kube-api-access-6kk59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.778249 4992 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.778289 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kk59\" (UniqueName: \"kubernetes.io/projected/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-kube-api-access-6kk59\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.840243 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.880693 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.931782 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ccc6dfb-e898-4b91-b5ff-4d33ede78901" (UID: "6ccc6dfb-e898-4b91-b5ff-4d33ede78901"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.982471 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.989986 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c9e3-account-create-update-vxlqn"]
Dec 11 08:44:51 crc kubenswrapper[4992]: I1211 08:44:51.996764 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-config-data" (OuterVolumeSpecName: "config-data") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.007406 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data" (OuterVolumeSpecName: "config-data") pod "6ccc6dfb-e898-4b91-b5ff-4d33ede78901" (UID: "6ccc6dfb-e898-4b91-b5ff-4d33ede78901"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.015868 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3483a09f-bb6f-470f-9485-3241dd60a448" (UID: "3483a09f-bb6f-470f-9485-3241dd60a448"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:44:52 crc kubenswrapper[4992]: W1211 08:44:52.017707 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1da60d29_4843_4e25_804b_a5b89de8f2f2.slice/crio-1c6ae15fe34579fe77c9265ba119be08c0dd8b714879fab34bf2aa20efb0651a WatchSource:0}: Error finding container 1c6ae15fe34579fe77c9265ba119be08c0dd8b714879fab34bf2aa20efb0651a: Status 404 returned error can't find the container with id 1c6ae15fe34579fe77c9265ba119be08c0dd8b714879fab34bf2aa20efb0651a
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.083797 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.084050 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3483a09f-bb6f-470f-9485-3241dd60a448-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.084060 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ccc6dfb-e898-4b91-b5ff-4d33ede78901-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.114854 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" path="/var/lib/kubelet/pods/c4783cdc-5222-45f5-b56c-f04c06cf7df7/volumes"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.342088 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-wxwh2"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.350063 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-580c-account-create-update-5jxfz"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.358010 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 11 08:44:52 crc kubenswrapper[4992]: W1211 08:44:52.399954 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79144990_622b_4d1b_8f2d_26707a7a6bd2.slice/crio-e2ab64e57c7e8b1e17272870889a57500262f9cefba63ed8c5a08ff944ee0d19 WatchSource:0}: Error finding container e2ab64e57c7e8b1e17272870889a57500262f9cefba63ed8c5a08ff944ee0d19: Status 404 returned error can't find the container with id e2ab64e57c7e8b1e17272870889a57500262f9cefba63ed8c5a08ff944ee0d19
Dec 11 08:44:52 crc kubenswrapper[4992]: W1211 08:44:52.408477 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50b99209_7df9_4ae4_9795_37a362cd6373.slice/crio-7d2867949cdd2c4b986ad6a52524094a51b8161d66fba4443e3a4c29dd00b9cd WatchSource:0}: Error finding container 7d2867949cdd2c4b986ad6a52524094a51b8161d66fba4443e3a4c29dd00b9cd: Status 404 returned error can't find the container with id 7d2867949cdd2c4b986ad6a52524094a51b8161d66fba4443e3a4c29dd00b9cd
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.435218 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tnwhj"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.447830 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" event={"ID":"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd","Type":"ContainerStarted","Data":"5c4d2a3c51797b94a98a4324eee92d0f8aed8be4ce09156a74dd774cde82efe2"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.448782 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50b99209-7df9-4ae4-9795-37a362cd6373","Type":"ContainerStarted","Data":"7d2867949cdd2c4b986ad6a52524094a51b8161d66fba4443e3a4c29dd00b9cd"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.452036 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" event={"ID":"1da60d29-4843-4e25-804b-a5b89de8f2f2","Type":"ContainerStarted","Data":"1c6ae15fe34579fe77c9265ba119be08c0dd8b714879fab34bf2aa20efb0651a"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.458224 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6886c495d8-l2qjd" event={"ID":"6ccc6dfb-e898-4b91-b5ff-4d33ede78901","Type":"ContainerDied","Data":"1bbb242ce2be8fade241362640d7c3ee0cbf72f333b48228a322a7ad72568aac"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.458262 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6886c495d8-l2qjd"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.458337 4992 scope.go:117] "RemoveContainer" containerID="128fa2ace3ab60ab82e383dd1c08edfea8b03168fa89784e7928b1bb654e9f64"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.478018 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x24wj" event={"ID":"88276bfc-171a-4c6d-b2a8-342e9a6f856d","Type":"ContainerStarted","Data":"17cf91bc7a6a8b3c199ece4c33439b9721f42af25ec643fe37c36c33e366036d"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.481456 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" event={"ID":"79144990-622b-4d1b-8f2d-26707a7a6bd2","Type":"ContainerStarted","Data":"e2ab64e57c7e8b1e17272870889a57500262f9cefba63ed8c5a08ff944ee0d19"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.484808 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3483a09f-bb6f-470f-9485-3241dd60a448","Type":"ContainerDied","Data":"e674f6911babdec5dbb64c46e01fe5cdc3db0235aaa1844b24653a6ed681520b"}
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.484901 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.496736 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.527929 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6886c495d8-l2qjd"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.540547 4992 scope.go:117] "RemoveContainer" containerID="27b729564b62fed56080caa742d27b5fad8943e96075bcc943cbb104188cbf18"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.556774 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6886c495d8-l2qjd"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.567440 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.596877 4992 scope.go:117] "RemoveContainer" containerID="5648e382dce9e69198a13d156f5f6a3145ca3ea7b7afcbabd905957b155749c9"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.633001 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.647489 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648480 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648509 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648591 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="sg-core"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648601 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="sg-core"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648622 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="init"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648692 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="init"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648752 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-central-agent"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648778 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-central-agent"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648800 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="proxy-httpd"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648810 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="proxy-httpd"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648823 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648830 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648842 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-notification-agent"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648850 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-notification-agent"
Dec 11 08:44:52 crc kubenswrapper[4992]: E1211 08:44:52.648865 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.648873 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.649143 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.650471 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" containerName="barbican-api-log"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.650513 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-notification-agent"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.650528 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4783cdc-5222-45f5-b56c-f04c06cf7df7" containerName="dnsmasq-dns"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.650542 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="proxy-httpd"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.650555 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="sg-core"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.650582 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" containerName="ceilometer-central-agent"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.652892 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.658890 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.658911 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.660738 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.779017 4992 scope.go:117] "RemoveContainer" containerID="e70914344f697e3a0b2023c5175f2a5257551e372d0ab94024320dc6c33e2083"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.802707 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.802760 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-config-data\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.802895 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-log-httpd\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.802924 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84hzh\" (UniqueName: \"kubernetes.io/projected/0551f53f-db50-4bbe-9cce-d605d03bd91f-kube-api-access-84hzh\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.802953 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-run-httpd\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.802968 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.803011 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-scripts\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.809337 4992 scope.go:117] "RemoveContainer" containerID="366e04509132822b09c09069ef03eec89dbaf165e106c581c1949254fad33501"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.844868 4992 scope.go:117] "RemoveContainer" containerID="33a790dfdb73d816a315e25db1506c2e0bc4433cf2a7c44400b12f65f7afafbb"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904379 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-scripts\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904468 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904499 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-config-data\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904563 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-log-httpd\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904585 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84hzh\" (UniqueName: \"kubernetes.io/projected/0551f53f-db50-4bbe-9cce-d605d03bd91f-kube-api-access-84hzh\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904614 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-run-httpd\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.904644 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.905392 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-log-httpd\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.905533 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-run-httpd\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.909899 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.922778 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-scripts\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.927239 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.928333 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-config-data\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:52 crc kubenswrapper[4992]: I1211 08:44:52.931276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84hzh\" (UniqueName: \"kubernetes.io/projected/0551f53f-db50-4bbe-9cce-d605d03bd91f-kube-api-access-84hzh\") pod \"ceilometer-0\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " pod="openstack/ceilometer-0"
Dec 11 08:44:53 crc kubenswrapper[4992]: I1211 08:44:53.090206 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:44:53 crc kubenswrapper[4992]: I1211 08:44:53.498314 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3ec8451b-e888-4b84-8cc3-0185265b8eae","Type":"ContainerStarted","Data":"7d3cd0d2f14214c5f870645dec304f1010cc771d9eccf1c40e4268796dde2021"}
Dec 11 08:44:53 crc kubenswrapper[4992]: I1211 08:44:53.501136 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tnwhj" event={"ID":"c870d81b-61d3-4eb2-b408-ce51fae0e19f","Type":"ContainerStarted","Data":"e0834fae258dc7123b4620c7f7985ab7937a5b9e58d0b5b356ab9b9d6b1d409b"}
Dec 11 08:44:53 crc kubenswrapper[4992]: I1211 08:44:53.535796 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:44:53 crc kubenswrapper[4992]: W1211 08:44:53.537060 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0551f53f_db50_4bbe_9cce_d605d03bd91f.slice/crio-17c40888257c9c2b920e57634ed0522331e6cf5d56632ae5407a33fb3b6e5f9e WatchSource:0}: Error finding container 17c40888257c9c2b920e57634ed0522331e6cf5d56632ae5407a33fb3b6e5f9e: Status 404 returned error can't find the container with id 17c40888257c9c2b920e57634ed0522331e6cf5d56632ae5407a33fb3b6e5f9e
Dec 11 08:44:54 crc kubenswrapper[4992]: I1211 08:44:54.116533 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3483a09f-bb6f-470f-9485-3241dd60a448" path="/var/lib/kubelet/pods/3483a09f-bb6f-470f-9485-3241dd60a448/volumes"
Dec 11 08:44:54 crc kubenswrapper[4992]: I1211 08:44:54.118092 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ccc6dfb-e898-4b91-b5ff-4d33ede78901" path="/var/lib/kubelet/pods/6ccc6dfb-e898-4b91-b5ff-4d33ede78901/volumes"
Dec 11 08:44:54 crc kubenswrapper[4992]: I1211 08:44:54.517650 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0"
event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerStarted","Data":"17c40888257c9c2b920e57634ed0522331e6cf5d56632ae5407a33fb3b6e5f9e"} Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.016069 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c59a-account-create-update-db54s"] Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.024251 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jqlwz"] Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.128622 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66db4d95cb-74j4r"] Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.315713 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7978c485bf-hpg7n"] Dec 11 08:44:56 crc kubenswrapper[4992]: W1211 08:44:56.341961 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04b4ce41_af3f_42d1_a340_e3d20519f217.slice/crio-f1ce077aa7d878d949df905fe14128bfcbd2f8d58e8923550e6856ac186cd5b0 WatchSource:0}: Error finding container f1ce077aa7d878d949df905fe14128bfcbd2f8d58e8923550e6856ac186cd5b0: Status 404 returned error can't find the container with id f1ce077aa7d878d949df905fe14128bfcbd2f8d58e8923550e6856ac186cd5b0 Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.535000 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66db4d95cb-74j4r" event={"ID":"9d96d34a-cfab-41e2-b77e-679b9a0a8a23","Type":"ContainerStarted","Data":"c04db233213d565380b05aa93120b93a50f3ba6717150c07137fc480e0994e91"} Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.537618 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" event={"ID":"1da60d29-4843-4e25-804b-a5b89de8f2f2","Type":"ContainerStarted","Data":"3f03e400a7702730c5b7dd5c9dde3e2999049e173690633185b9326c64efa105"} Dec 11 
08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.538868 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jqlwz" event={"ID":"953feb5a-bed1-4457-b25e-fa4716bbad75","Type":"ContainerStarted","Data":"c859c97111a8e760dbc825bb5b9533288ec5b3a1be519252f0abe09f011c6474"} Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.540575 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d94866685-kpw9g" event={"ID":"f10bd5d3-8ab6-4950-96e9-b683e47619ea","Type":"ContainerStarted","Data":"17c40ed1ef3e7eb9f19dec6d62fe9b89dc2080aeacbc1ee1d2f879949903a815"} Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.541659 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7978c485bf-hpg7n" event={"ID":"04b4ce41-af3f-42d1-a340-e3d20519f217","Type":"ContainerStarted","Data":"f1ce077aa7d878d949df905fe14128bfcbd2f8d58e8923550e6856ac186cd5b0"} Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.542814 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c59a-account-create-update-db54s" event={"ID":"0b8792cf-dd70-4b14-b007-9e6ed8632bd6","Type":"ContainerStarted","Data":"0183cae01a8f27a0686d25bf3a6cb2c0fab5ba9d92e34657ebe11a53bcb32d18"} Dec 11 08:44:56 crc kubenswrapper[4992]: I1211 08:44:56.544585 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" event={"ID":"4e585d4a-7315-4ad3-a670-4b7eff004054","Type":"ContainerStarted","Data":"7acdea636e592f5abbc1396d056cd6666aeca73c74da2ff73d9b058685c46b6a"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.554627 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x24wj" event={"ID":"88276bfc-171a-4c6d-b2a8-342e9a6f856d","Type":"ContainerStarted","Data":"32c7226013f1e776450786bea9294c2b1844b1f8af8fd9332996ee055cfbaac0"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.557014 4992 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" event={"ID":"79144990-622b-4d1b-8f2d-26707a7a6bd2","Type":"ContainerStarted","Data":"ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.558456 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3ec8451b-e888-4b84-8cc3-0185265b8eae","Type":"ContainerStarted","Data":"108bff959be231fc9046f279579da2c6230602bf6a574f77927c75176cbee35b"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.559980 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" event={"ID":"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd","Type":"ContainerStarted","Data":"a2ddd2ace3bdbaabf11623749930b58193de83e7a692b0fdb1b7bf1c165d487d"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.561282 4992 generic.go:334] "Generic (PLEG): container finished" podID="4e585d4a-7315-4ad3-a670-4b7eff004054" containerID="7acdea636e592f5abbc1396d056cd6666aeca73c74da2ff73d9b058685c46b6a" exitCode=0 Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.561340 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" event={"ID":"4e585d4a-7315-4ad3-a670-4b7eff004054","Type":"ContainerDied","Data":"7acdea636e592f5abbc1396d056cd6666aeca73c74da2ff73d9b058685c46b6a"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.563448 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tnwhj" event={"ID":"c870d81b-61d3-4eb2-b408-ce51fae0e19f","Type":"ContainerStarted","Data":"713036c0ac8c668432994f997bd6b71000fd453e2752a8bed220c38b35a448f8"} Dec 11 08:44:57 crc kubenswrapper[4992]: I1211 08:44:57.941536 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.047410 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-nb\") pod \"4e585d4a-7315-4ad3-a670-4b7eff004054\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.047714 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-svc\") pod \"4e585d4a-7315-4ad3-a670-4b7eff004054\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.047738 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vsq5\" (UniqueName: \"kubernetes.io/projected/4e585d4a-7315-4ad3-a670-4b7eff004054-kube-api-access-2vsq5\") pod \"4e585d4a-7315-4ad3-a670-4b7eff004054\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.047757 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-sb\") pod \"4e585d4a-7315-4ad3-a670-4b7eff004054\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.048290 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-swift-storage-0\") pod \"4e585d4a-7315-4ad3-a670-4b7eff004054\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.048413 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-config\") pod \"4e585d4a-7315-4ad3-a670-4b7eff004054\" (UID: \"4e585d4a-7315-4ad3-a670-4b7eff004054\") " Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.054101 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e585d4a-7315-4ad3-a670-4b7eff004054-kube-api-access-2vsq5" (OuterVolumeSpecName: "kube-api-access-2vsq5") pod "4e585d4a-7315-4ad3-a670-4b7eff004054" (UID: "4e585d4a-7315-4ad3-a670-4b7eff004054"). InnerVolumeSpecName "kube-api-access-2vsq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.078999 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e585d4a-7315-4ad3-a670-4b7eff004054" (UID: "4e585d4a-7315-4ad3-a670-4b7eff004054"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.079186 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4e585d4a-7315-4ad3-a670-4b7eff004054" (UID: "4e585d4a-7315-4ad3-a670-4b7eff004054"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.080000 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e585d4a-7315-4ad3-a670-4b7eff004054" (UID: "4e585d4a-7315-4ad3-a670-4b7eff004054"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.098191 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e585d4a-7315-4ad3-a670-4b7eff004054" (UID: "4e585d4a-7315-4ad3-a670-4b7eff004054"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.121512 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-config" (OuterVolumeSpecName: "config") pod "4e585d4a-7315-4ad3-a670-4b7eff004054" (UID: "4e585d4a-7315-4ad3-a670-4b7eff004054"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.150978 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.151017 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.151031 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.151043 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vsq5\" (UniqueName: \"kubernetes.io/projected/4e585d4a-7315-4ad3-a670-4b7eff004054-kube-api-access-2vsq5\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:58 crc 
kubenswrapper[4992]: I1211 08:44:58.151056 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.151066 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4e585d4a-7315-4ad3-a670-4b7eff004054-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.583368 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" event={"ID":"4e585d4a-7315-4ad3-a670-4b7eff004054","Type":"ContainerDied","Data":"624274ffec2a792a6f88260c0348be5f19cc683a980e5009af525ac31e2c2361"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.583786 4992 scope.go:117] "RemoveContainer" containerID="7acdea636e592f5abbc1396d056cd6666aeca73c74da2ff73d9b058685c46b6a" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.583389 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-5j6cp" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.592657 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66db4d95cb-74j4r" event={"ID":"9d96d34a-cfab-41e2-b77e-679b9a0a8a23","Type":"ContainerStarted","Data":"387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.592718 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66db4d95cb-74j4r" event={"ID":"9d96d34a-cfab-41e2-b77e-679b9a0a8a23","Type":"ContainerStarted","Data":"a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.592765 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.605925 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jqlwz" event={"ID":"953feb5a-bed1-4457-b25e-fa4716bbad75","Type":"ContainerStarted","Data":"d298afd08c4ad8bf5a226067f04101999cfbec017a9eaac331c44b6366535b49"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.616306 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d94866685-kpw9g" event={"ID":"f10bd5d3-8ab6-4950-96e9-b683e47619ea","Type":"ContainerStarted","Data":"c542796f510b77479357b693a9b389ceeb57b8343ab11d927e40cdd267065d8f"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.616721 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.616831 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.617575 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-66db4d95cb-74j4r" podStartSLOduration=30.617558823 podStartE2EDuration="30.617558823s" podCreationTimestamp="2025-12-11 08:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.615766669 +0000 UTC m=+1322.875240595" watchObservedRunningTime="2025-12-11 08:44:58.617558823 +0000 UTC m=+1322.877032749" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.629352 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7978c485bf-hpg7n" event={"ID":"04b4ce41-af3f-42d1-a340-e3d20519f217","Type":"ContainerStarted","Data":"1baf1e883f0cf06d16b259470ac7f8827ac220f4bd22590d40b5515c22562e91"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.629409 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7978c485bf-hpg7n" event={"ID":"04b4ce41-af3f-42d1-a340-e3d20519f217","Type":"ContainerStarted","Data":"bc56867e05e257f3b980404ccd451290bce0b81b71db89bdbaa3d07d76375c88"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.630347 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.641207 4992 generic.go:334] "Generic (PLEG): container finished" podID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerID="ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c" exitCode=0 Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.641303 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" event={"ID":"79144990-622b-4d1b-8f2d-26707a7a6bd2","Type":"ContainerDied","Data":"ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.649749 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api-log" containerID="cri-o://108bff959be231fc9046f279579da2c6230602bf6a574f77927c75176cbee35b" gracePeriod=30 Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.650049 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3ec8451b-e888-4b84-8cc3-0185265b8eae","Type":"ContainerStarted","Data":"76073bdf27df8053ec5a228eac049a38d86fb0df0cbafe1849d9cc737f647d18"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.650108 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.650143 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api" containerID="cri-o://76073bdf27df8053ec5a228eac049a38d86fb0df0cbafe1849d9cc737f647d18" gracePeriod=30 Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.670785 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c59a-account-create-update-db54s" event={"ID":"0b8792cf-dd70-4b14-b007-9e6ed8632bd6","Type":"ContainerStarted","Data":"5c3a2e0e6d6cfd7220c851fd102ab63a76e1473d65aaa29c5c451f2b747ec2e2"} Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.685400 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-5j6cp"] Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.696016 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-5j6cp"] Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.736554 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-jqlwz" podStartSLOduration=28.736533067 podStartE2EDuration="28.736533067s" podCreationTimestamp="2025-12-11 08:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.702141065 +0000 UTC m=+1322.961614991" watchObservedRunningTime="2025-12-11 08:44:58.736533067 +0000 UTC m=+1322.996006993" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.751134 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" podStartSLOduration=27.751112595 podStartE2EDuration="27.751112595s" podCreationTimestamp="2025-12-11 08:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.718540796 +0000 UTC m=+1322.978014722" watchObservedRunningTime="2025-12-11 08:44:58.751112595 +0000 UTC m=+1323.010586521" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.767096 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-tnwhj" podStartSLOduration=28.767070915 podStartE2EDuration="28.767070915s" podCreationTimestamp="2025-12-11 08:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.746016269 +0000 UTC m=+1323.005490185" watchObservedRunningTime="2025-12-11 08:44:58.767070915 +0000 UTC m=+1323.026544841" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.784231 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-x24wj" podStartSLOduration=28.784207265 podStartE2EDuration="28.784207265s" podCreationTimestamp="2025-12-11 08:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.760584426 +0000 UTC m=+1323.020058362" watchObservedRunningTime="2025-12-11 08:44:58.784207265 +0000 UTC m=+1323.043681201" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 
08:44:58.804744 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-d94866685-kpw9g" podStartSLOduration=30.804722058 podStartE2EDuration="30.804722058s" podCreationTimestamp="2025-12-11 08:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.802528343 +0000 UTC m=+1323.062002269" watchObservedRunningTime="2025-12-11 08:44:58.804722058 +0000 UTC m=+1323.064195984" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.829409 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" podStartSLOduration=28.829390471 podStartE2EDuration="28.829390471s" podCreationTimestamp="2025-12-11 08:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.821801646 +0000 UTC m=+1323.081275582" watchObservedRunningTime="2025-12-11 08:44:58.829390471 +0000 UTC m=+1323.088864397" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.837297 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-c59a-account-create-update-db54s" podStartSLOduration=28.837279385 podStartE2EDuration="28.837279385s" podCreationTimestamp="2025-12-11 08:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.836544788 +0000 UTC m=+1323.096018714" watchObservedRunningTime="2025-12-11 08:44:58.837279385 +0000 UTC m=+1323.096753311" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.863256 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7978c485bf-hpg7n" podStartSLOduration=25.863233802 podStartE2EDuration="25.863233802s" podCreationTimestamp="2025-12-11 08:44:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.85622748 +0000 UTC m=+1323.115701406" watchObservedRunningTime="2025-12-11 08:44:58.863233802 +0000 UTC m=+1323.122707728" Dec 11 08:44:58 crc kubenswrapper[4992]: I1211 08:44:58.895054 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=30.89502844 podStartE2EDuration="30.89502844s" podCreationTimestamp="2025-12-11 08:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:44:58.879601022 +0000 UTC m=+1323.139074968" watchObservedRunningTime="2025-12-11 08:44:58.89502844 +0000 UTC m=+1323.154502366" Dec 11 08:44:59 crc kubenswrapper[4992]: I1211 08:44:59.697276 4992 generic.go:334] "Generic (PLEG): container finished" podID="88276bfc-171a-4c6d-b2a8-342e9a6f856d" containerID="32c7226013f1e776450786bea9294c2b1844b1f8af8fd9332996ee055cfbaac0" exitCode=0 Dec 11 08:44:59 crc kubenswrapper[4992]: I1211 08:44:59.697367 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x24wj" event={"ID":"88276bfc-171a-4c6d-b2a8-342e9a6f856d","Type":"ContainerDied","Data":"32c7226013f1e776450786bea9294c2b1844b1f8af8fd9332996ee055cfbaac0"} Dec 11 08:44:59 crc kubenswrapper[4992]: I1211 08:44:59.701744 4992 generic.go:334] "Generic (PLEG): container finished" podID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerID="76073bdf27df8053ec5a228eac049a38d86fb0df0cbafe1849d9cc737f647d18" exitCode=0 Dec 11 08:44:59 crc kubenswrapper[4992]: I1211 08:44:59.701898 4992 generic.go:334] "Generic (PLEG): container finished" podID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerID="108bff959be231fc9046f279579da2c6230602bf6a574f77927c75176cbee35b" exitCode=143 Dec 11 08:44:59 crc kubenswrapper[4992]: I1211 08:44:59.702868 4992 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3ec8451b-e888-4b84-8cc3-0185265b8eae","Type":"ContainerDied","Data":"76073bdf27df8053ec5a228eac049a38d86fb0df0cbafe1849d9cc737f647d18"} Dec 11 08:44:59 crc kubenswrapper[4992]: I1211 08:44:59.702998 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3ec8451b-e888-4b84-8cc3-0185265b8eae","Type":"ContainerDied","Data":"108bff959be231fc9046f279579da2c6230602bf6a574f77927c75176cbee35b"} Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.048331 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.113729 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e585d4a-7315-4ad3-a670-4b7eff004054" path="/var/lib/kubelet/pods/4e585d4a-7315-4ad3-a670-4b7eff004054/volumes" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.136407 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd"] Dec 11 08:45:00 crc kubenswrapper[4992]: E1211 08:45:00.137186 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api-log" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.137209 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api-log" Dec 11 08:45:00 crc kubenswrapper[4992]: E1211 08:45:00.137218 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e585d4a-7315-4ad3-a670-4b7eff004054" containerName="init" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.137224 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e585d4a-7315-4ad3-a670-4b7eff004054" containerName="init" Dec 11 08:45:00 crc kubenswrapper[4992]: E1211 08:45:00.137243 4992 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.137250 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.137462 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e585d4a-7315-4ad3-a670-4b7eff004054" containerName="init" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.137479 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api-log" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.137491 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" containerName="cinder-api" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.138435 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.140388 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.140552 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.152343 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd"] Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206563 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: 
\"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206715 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-scripts\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206774 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data-custom\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206819 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26kft\" (UniqueName: \"kubernetes.io/projected/3ec8451b-e888-4b84-8cc3-0185265b8eae-kube-api-access-26kft\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206873 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ec8451b-e888-4b84-8cc3-0185265b8eae-etc-machine-id\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206900 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-combined-ca-bundle\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.206962 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/3ec8451b-e888-4b84-8cc3-0185265b8eae-logs\") pod \"3ec8451b-e888-4b84-8cc3-0185265b8eae\" (UID: \"3ec8451b-e888-4b84-8cc3-0185265b8eae\") " Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.207671 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ec8451b-e888-4b84-8cc3-0185265b8eae-logs" (OuterVolumeSpecName: "logs") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.208134 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ec8451b-e888-4b84-8cc3-0185265b8eae-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.211328 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec8451b-e888-4b84-8cc3-0185265b8eae-kube-api-access-26kft" (OuterVolumeSpecName: "kube-api-access-26kft") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "kube-api-access-26kft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.211612 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-scripts" (OuterVolumeSpecName: "scripts") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.211769 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.233904 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.255230 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data" (OuterVolumeSpecName: "config-data") pod "3ec8451b-e888-4b84-8cc3-0185265b8eae" (UID: "3ec8451b-e888-4b84-8cc3-0185265b8eae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309545 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8pvw\" (UniqueName: \"kubernetes.io/projected/5a301df2-43ad-4899-8ac7-548594484377-kube-api-access-k8pvw\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309672 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a301df2-43ad-4899-8ac7-548594484377-secret-volume\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309697 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a301df2-43ad-4899-8ac7-548594484377-config-volume\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309809 4992 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309822 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26kft\" (UniqueName: \"kubernetes.io/projected/3ec8451b-e888-4b84-8cc3-0185265b8eae-kube-api-access-26kft\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 
08:45:00.309832 4992 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ec8451b-e888-4b84-8cc3-0185265b8eae-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309842 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309851 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ec8451b-e888-4b84-8cc3-0185265b8eae-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309859 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.309867 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ec8451b-e888-4b84-8cc3-0185265b8eae-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.411856 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a301df2-43ad-4899-8ac7-548594484377-secret-volume\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.411908 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a301df2-43ad-4899-8ac7-548594484377-config-volume\") pod \"collect-profiles-29424045-gtqsd\" (UID: 
\"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.411994 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8pvw\" (UniqueName: \"kubernetes.io/projected/5a301df2-43ad-4899-8ac7-548594484377-kube-api-access-k8pvw\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.413418 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a301df2-43ad-4899-8ac7-548594484377-config-volume\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.415135 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a301df2-43ad-4899-8ac7-548594484377-secret-volume\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.426825 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8pvw\" (UniqueName: \"kubernetes.io/projected/5a301df2-43ad-4899-8ac7-548594484377-kube-api-access-k8pvw\") pod \"collect-profiles-29424045-gtqsd\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.459665 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.710479 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" event={"ID":"79144990-622b-4d1b-8f2d-26707a7a6bd2","Type":"ContainerStarted","Data":"b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e"} Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.713341 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.721588 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3ec8451b-e888-4b84-8cc3-0185265b8eae","Type":"ContainerDied","Data":"7d3cd0d2f14214c5f870645dec304f1010cc771d9eccf1c40e4268796dde2021"} Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.721657 4992 scope.go:117] "RemoveContainer" containerID="76073bdf27df8053ec5a228eac049a38d86fb0df0cbafe1849d9cc737f647d18" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.745770 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.754020 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.767858 4992 scope.go:117] "RemoveContainer" containerID="108bff959be231fc9046f279579da2c6230602bf6a574f77927c75176cbee35b" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.776566 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.778805 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.783703 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.784055 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.784332 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.794176 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.893168 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd"] Dec 11 08:45:00 crc kubenswrapper[4992]: W1211 08:45:00.915954 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a301df2_43ad_4899_8ac7_548594484377.slice/crio-ba3a0047f5be0e5b7974f275b0203addd0dcec8f944f987b746d08d202ee4720 WatchSource:0}: Error finding container ba3a0047f5be0e5b7974f275b0203addd0dcec8f944f987b746d08d202ee4720: Status 404 returned error can't find the container with id ba3a0047f5be0e5b7974f275b0203addd0dcec8f944f987b746d08d202ee4720 Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926427 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-scripts\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926473 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926500 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-config-data\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926517 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bca5ca17-107c-4c9b-8901-dcf3f962e927-logs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926575 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca5ca17-107c-4c9b-8901-dcf3f962e927-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926610 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-config-data-custom\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926712 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926761 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29zxk\" (UniqueName: \"kubernetes.io/projected/bca5ca17-107c-4c9b-8901-dcf3f962e927-kube-api-access-29zxk\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:00 crc kubenswrapper[4992]: I1211 08:45:00.926816 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.012948 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029463 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-scripts\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029523 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029559 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-config-data\") pod \"cinder-api-0\" 
(UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029586 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bca5ca17-107c-4c9b-8901-dcf3f962e927-logs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029683 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca5ca17-107c-4c9b-8901-dcf3f962e927-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029718 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-config-data-custom\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029755 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029811 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29zxk\" (UniqueName: \"kubernetes.io/projected/bca5ca17-107c-4c9b-8901-dcf3f962e927-kube-api-access-29zxk\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.029881 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.030439 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bca5ca17-107c-4c9b-8901-dcf3f962e927-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.031003 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bca5ca17-107c-4c9b-8901-dcf3f962e927-logs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.035386 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.035533 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.035864 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-config-data-custom\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc 
kubenswrapper[4992]: I1211 08:45:01.036003 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.036201 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-scripts\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.038301 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bca5ca17-107c-4c9b-8901-dcf3f962e927-config-data\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.048261 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29zxk\" (UniqueName: \"kubernetes.io/projected/bca5ca17-107c-4c9b-8901-dcf3f962e927-kube-api-access-29zxk\") pod \"cinder-api-0\" (UID: \"bca5ca17-107c-4c9b-8901-dcf3f962e927\") " pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.111003 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.131570 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5947k\" (UniqueName: \"kubernetes.io/projected/88276bfc-171a-4c6d-b2a8-342e9a6f856d-kube-api-access-5947k\") pod \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.131747 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88276bfc-171a-4c6d-b2a8-342e9a6f856d-operator-scripts\") pod \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\" (UID: \"88276bfc-171a-4c6d-b2a8-342e9a6f856d\") " Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.132223 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88276bfc-171a-4c6d-b2a8-342e9a6f856d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88276bfc-171a-4c6d-b2a8-342e9a6f856d" (UID: "88276bfc-171a-4c6d-b2a8-342e9a6f856d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.133112 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88276bfc-171a-4c6d-b2a8-342e9a6f856d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.135273 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88276bfc-171a-4c6d-b2a8-342e9a6f856d-kube-api-access-5947k" (OuterVolumeSpecName: "kube-api-access-5947k") pod "88276bfc-171a-4c6d-b2a8-342e9a6f856d" (UID: "88276bfc-171a-4c6d-b2a8-342e9a6f856d"). InnerVolumeSpecName "kube-api-access-5947k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.252223 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5947k\" (UniqueName: \"kubernetes.io/projected/88276bfc-171a-4c6d-b2a8-342e9a6f856d-kube-api-access-5947k\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.606171 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.734855 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bca5ca17-107c-4c9b-8901-dcf3f962e927","Type":"ContainerStarted","Data":"aac177885dfed7219cd3fbdb6607cb3a66ac2f1517b82de1de61402a9d789c71"} Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.736337 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-x24wj" event={"ID":"88276bfc-171a-4c6d-b2a8-342e9a6f856d","Type":"ContainerDied","Data":"17cf91bc7a6a8b3c199ece4c33439b9721f42af25ec643fe37c36c33e366036d"} Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.736365 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17cf91bc7a6a8b3c199ece4c33439b9721f42af25ec643fe37c36c33e366036d" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.736428 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-x24wj" Dec 11 08:45:01 crc kubenswrapper[4992]: I1211 08:45:01.737340 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" event={"ID":"5a301df2-43ad-4899-8ac7-548594484377","Type":"ContainerStarted","Data":"ba3a0047f5be0e5b7974f275b0203addd0dcec8f944f987b746d08d202ee4720"} Dec 11 08:45:02 crc kubenswrapper[4992]: I1211 08:45:02.115300 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec8451b-e888-4b84-8cc3-0185265b8eae" path="/var/lib/kubelet/pods/3ec8451b-e888-4b84-8cc3-0185265b8eae/volumes" Dec 11 08:45:03 crc kubenswrapper[4992]: I1211 08:45:03.471171 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:45:03 crc kubenswrapper[4992]: I1211 08:45:03.472260 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d94866685-kpw9g" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.618647 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7978c485bf-hpg7n" podUID="04b4ce41-af3f-42d1-a340-e3d20519f217" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.619328 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-7978c485bf-hpg7n" podUID="04b4ce41-af3f-42d1-a340-e3d20519f217" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.631446 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7978c485bf-hpg7n" podUID="04b4ce41-af3f-42d1-a340-e3d20519f217" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 
08:45:04.763306 4992 generic.go:334] "Generic (PLEG): container finished" podID="953feb5a-bed1-4457-b25e-fa4716bbad75" containerID="d298afd08c4ad8bf5a226067f04101999cfbec017a9eaac331c44b6366535b49" exitCode=0 Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.763477 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jqlwz" event={"ID":"953feb5a-bed1-4457-b25e-fa4716bbad75","Type":"ContainerDied","Data":"d298afd08c4ad8bf5a226067f04101999cfbec017a9eaac331c44b6366535b49"} Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.766311 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bca5ca17-107c-4c9b-8901-dcf3f962e927","Type":"ContainerStarted","Data":"f52567e5f5252bd7e934cb36f0d93897f2a0d7eb96fe8d1a9d69aeccd14b3746"} Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.771357 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" event={"ID":"5a301df2-43ad-4899-8ac7-548594484377","Type":"ContainerStarted","Data":"ad474b79d8425729a102bafcb29bc5ad5541b536801b099413d7991c28175d5b"} Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.771409 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.775813 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.898850 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" podStartSLOduration=36.898828047 podStartE2EDuration="36.898828047s" podCreationTimestamp="2025-12-11 08:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:04.803564513 +0000 UTC 
m=+1329.063038439" watchObservedRunningTime="2025-12-11 08:45:04.898828047 +0000 UTC m=+1329.158301973" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.903812 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" podStartSLOduration=4.903797759 podStartE2EDuration="4.903797759s" podCreationTimestamp="2025-12-11 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:04.838966081 +0000 UTC m=+1329.098440007" watchObservedRunningTime="2025-12-11 08:45:04.903797759 +0000 UTC m=+1329.163271685" Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.919235 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.926849 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s8vfc"] Dec 11 08:45:04 crc kubenswrapper[4992]: I1211 08:45:04.927066 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerName="dnsmasq-dns" containerID="cri-o://7e674cfad584638aebe91ee68571d1145ef19ae4f709c29b2ce1505d2bccc553" gracePeriod=10 Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.783696 4992 generic.go:334] "Generic (PLEG): container finished" podID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerID="7e674cfad584638aebe91ee68571d1145ef19ae4f709c29b2ce1505d2bccc553" exitCode=0 Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.783780 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" event={"ID":"ff4bb71d-a1e0-4bb1-8510-ee381c395f87","Type":"ContainerDied","Data":"7e674cfad584638aebe91ee68571d1145ef19ae4f709c29b2ce1505d2bccc553"} Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 
08:45:05.785543 4992 generic.go:334] "Generic (PLEG): container finished" podID="5a301df2-43ad-4899-8ac7-548594484377" containerID="ad474b79d8425729a102bafcb29bc5ad5541b536801b099413d7991c28175d5b" exitCode=0 Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.785609 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" event={"ID":"5a301df2-43ad-4899-8ac7-548594484377","Type":"ContainerDied","Data":"ad474b79d8425729a102bafcb29bc5ad5541b536801b099413d7991c28175d5b"} Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.788650 4992 generic.go:334] "Generic (PLEG): container finished" podID="94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" containerID="a2ddd2ace3bdbaabf11623749930b58193de83e7a692b0fdb1b7bf1c165d487d" exitCode=0 Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.788718 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" event={"ID":"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd","Type":"ContainerDied","Data":"a2ddd2ace3bdbaabf11623749930b58193de83e7a692b0fdb1b7bf1c165d487d"} Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.790420 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bca5ca17-107c-4c9b-8901-dcf3f962e927","Type":"ContainerStarted","Data":"41c7f5724ffc79bd7b2ed3b1e6ec5d2962d6f3e1dcb0a0a68df135233fe87ebd"} Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.791144 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.792481 4992 generic.go:334] "Generic (PLEG): container finished" podID="c870d81b-61d3-4eb2-b408-ce51fae0e19f" containerID="713036c0ac8c668432994f997bd6b71000fd453e2752a8bed220c38b35a448f8" exitCode=0 Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.792705 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-tnwhj" event={"ID":"c870d81b-61d3-4eb2-b408-ce51fae0e19f","Type":"ContainerDied","Data":"713036c0ac8c668432994f997bd6b71000fd453e2752a8bed220c38b35a448f8"} Dec 11 08:45:05 crc kubenswrapper[4992]: I1211 08:45:05.819324 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.819303646 podStartE2EDuration="5.819303646s" podCreationTimestamp="2025-12-11 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:05.817747258 +0000 UTC m=+1330.077221194" watchObservedRunningTime="2025-12-11 08:45:05.819303646 +0000 UTC m=+1330.078777572" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.311615 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.318553 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.458869 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953feb5a-bed1-4457-b25e-fa4716bbad75-operator-scripts\") pod \"953feb5a-bed1-4457-b25e-fa4716bbad75\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459262 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6cbw\" (UniqueName: \"kubernetes.io/projected/953feb5a-bed1-4457-b25e-fa4716bbad75-kube-api-access-c6cbw\") pod \"953feb5a-bed1-4457-b25e-fa4716bbad75\" (UID: \"953feb5a-bed1-4457-b25e-fa4716bbad75\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459316 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-sb\") pod \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459373 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-config\") pod \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459396 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-swift-storage-0\") pod \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459438 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-svc\") pod \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459470 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7x45\" (UniqueName: \"kubernetes.io/projected/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-kube-api-access-t7x45\") pod \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.459555 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-nb\") pod \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\" (UID: \"ff4bb71d-a1e0-4bb1-8510-ee381c395f87\") " Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.460211 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953feb5a-bed1-4457-b25e-fa4716bbad75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "953feb5a-bed1-4457-b25e-fa4716bbad75" (UID: "953feb5a-bed1-4457-b25e-fa4716bbad75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.471735 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953feb5a-bed1-4457-b25e-fa4716bbad75-kube-api-access-c6cbw" (OuterVolumeSpecName: "kube-api-access-c6cbw") pod "953feb5a-bed1-4457-b25e-fa4716bbad75" (UID: "953feb5a-bed1-4457-b25e-fa4716bbad75"). InnerVolumeSpecName "kube-api-access-c6cbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.472166 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-kube-api-access-t7x45" (OuterVolumeSpecName: "kube-api-access-t7x45") pod "ff4bb71d-a1e0-4bb1-8510-ee381c395f87" (UID: "ff4bb71d-a1e0-4bb1-8510-ee381c395f87"). InnerVolumeSpecName "kube-api-access-t7x45". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.533287 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff4bb71d-a1e0-4bb1-8510-ee381c395f87" (UID: "ff4bb71d-a1e0-4bb1-8510-ee381c395f87"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.543213 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff4bb71d-a1e0-4bb1-8510-ee381c395f87" (UID: "ff4bb71d-a1e0-4bb1-8510-ee381c395f87"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.560744 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff4bb71d-a1e0-4bb1-8510-ee381c395f87" (UID: "ff4bb71d-a1e0-4bb1-8510-ee381c395f87"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.561954 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.561971 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.561980 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7x45\" (UniqueName: \"kubernetes.io/projected/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-kube-api-access-t7x45\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.561990 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953feb5a-bed1-4457-b25e-fa4716bbad75-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.561998 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6cbw\" (UniqueName: \"kubernetes.io/projected/953feb5a-bed1-4457-b25e-fa4716bbad75-kube-api-access-c6cbw\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.562006 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.562877 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff4bb71d-a1e0-4bb1-8510-ee381c395f87" (UID: 
"ff4bb71d-a1e0-4bb1-8510-ee381c395f87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.564296 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-config" (OuterVolumeSpecName: "config") pod "ff4bb71d-a1e0-4bb1-8510-ee381c395f87" (UID: "ff4bb71d-a1e0-4bb1-8510-ee381c395f87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.664982 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.665024 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff4bb71d-a1e0-4bb1-8510-ee381c395f87-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.805155 4992 generic.go:334] "Generic (PLEG): container finished" podID="0b8792cf-dd70-4b14-b007-9e6ed8632bd6" containerID="5c3a2e0e6d6cfd7220c851fd102ab63a76e1473d65aaa29c5c451f2b747ec2e2" exitCode=0 Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.805225 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c59a-account-create-update-db54s" event={"ID":"0b8792cf-dd70-4b14-b007-9e6ed8632bd6","Type":"ContainerDied","Data":"5c3a2e0e6d6cfd7220c851fd102ab63a76e1473d65aaa29c5c451f2b747ec2e2"} Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.817212 4992 generic.go:334] "Generic (PLEG): container finished" podID="1da60d29-4843-4e25-804b-a5b89de8f2f2" containerID="3f03e400a7702730c5b7dd5c9dde3e2999049e173690633185b9326c64efa105" exitCode=0 Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.817283 4992 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" event={"ID":"1da60d29-4843-4e25-804b-a5b89de8f2f2","Type":"ContainerDied","Data":"3f03e400a7702730c5b7dd5c9dde3e2999049e173690633185b9326c64efa105"} Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.819943 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"421fdf51-5a39-4d80-b066-a715006c2f85","Type":"ContainerStarted","Data":"f6ae4a72fc52537bca9754a118aecf55e3614f86770e917e49df27691f68088d"} Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.826157 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jqlwz" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.826164 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jqlwz" event={"ID":"953feb5a-bed1-4457-b25e-fa4716bbad75","Type":"ContainerDied","Data":"c859c97111a8e760dbc825bb5b9533288ec5b3a1be519252f0abe09f011c6474"} Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.826204 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c859c97111a8e760dbc825bb5b9533288ec5b3a1be519252f0abe09f011c6474" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.829971 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" event={"ID":"ff4bb71d-a1e0-4bb1-8510-ee381c395f87","Type":"ContainerDied","Data":"09372a5e2ca9279d4e7ae9bde0e6a458f1a1f5dcfd76e4de54b0a30492695e98"} Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.830016 4992 scope.go:117] "RemoveContainer" containerID="7e674cfad584638aebe91ee68571d1145ef19ae4f709c29b2ce1505d2bccc553" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.830126 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-s8vfc" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.849078 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerStarted","Data":"3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d"} Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.922047 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.148762873 podStartE2EDuration="46.922023519s" podCreationTimestamp="2025-12-11 08:44:20 +0000 UTC" firstStartedPulling="2025-12-11 08:44:21.622081661 +0000 UTC m=+1285.881555587" lastFinishedPulling="2025-12-11 08:45:06.395342307 +0000 UTC m=+1330.654816233" observedRunningTime="2025-12-11 08:45:06.865600688 +0000 UTC m=+1331.125074614" watchObservedRunningTime="2025-12-11 08:45:06.922023519 +0000 UTC m=+1331.181497435" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.922317 4992 scope.go:117] "RemoveContainer" containerID="ff36933e55db3abb3b1c6102b104f6fe2fc53a68c103e8d2cb7a1ded0ca96883" Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.935574 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s8vfc"] Dec 11 08:45:06 crc kubenswrapper[4992]: I1211 08:45:06.944493 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-s8vfc"] Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.300857 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.343940 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.351560 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401569 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c870d81b-61d3-4eb2-b408-ce51fae0e19f-operator-scripts\") pod \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401620 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-operator-scripts\") pod \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401671 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a301df2-43ad-4899-8ac7-548594484377-secret-volume\") pod \"5a301df2-43ad-4899-8ac7-548594484377\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401750 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7m25\" (UniqueName: \"kubernetes.io/projected/c870d81b-61d3-4eb2-b408-ce51fae0e19f-kube-api-access-l7m25\") pod \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\" (UID: \"c870d81b-61d3-4eb2-b408-ce51fae0e19f\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401770 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8pvw\" (UniqueName: \"kubernetes.io/projected/5a301df2-43ad-4899-8ac7-548594484377-kube-api-access-k8pvw\") pod 
\"5a301df2-43ad-4899-8ac7-548594484377\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401800 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6pd2\" (UniqueName: \"kubernetes.io/projected/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-kube-api-access-p6pd2\") pod \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\" (UID: \"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.401822 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a301df2-43ad-4899-8ac7-548594484377-config-volume\") pod \"5a301df2-43ad-4899-8ac7-548594484377\" (UID: \"5a301df2-43ad-4899-8ac7-548594484377\") " Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.402756 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a301df2-43ad-4899-8ac7-548594484377-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a301df2-43ad-4899-8ac7-548594484377" (UID: "5a301df2-43ad-4899-8ac7-548594484377"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.405442 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" (UID: "94ba867d-ba4d-4bb1-81aa-b46fa62f43bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.405608 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c870d81b-61d3-4eb2-b408-ce51fae0e19f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c870d81b-61d3-4eb2-b408-ce51fae0e19f" (UID: "c870d81b-61d3-4eb2-b408-ce51fae0e19f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.406432 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c870d81b-61d3-4eb2-b408-ce51fae0e19f-kube-api-access-l7m25" (OuterVolumeSpecName: "kube-api-access-l7m25") pod "c870d81b-61d3-4eb2-b408-ce51fae0e19f" (UID: "c870d81b-61d3-4eb2-b408-ce51fae0e19f"). InnerVolumeSpecName "kube-api-access-l7m25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.409097 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-kube-api-access-p6pd2" (OuterVolumeSpecName: "kube-api-access-p6pd2") pod "94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" (UID: "94ba867d-ba4d-4bb1-81aa-b46fa62f43bd"). InnerVolumeSpecName "kube-api-access-p6pd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.409587 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a301df2-43ad-4899-8ac7-548594484377-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a301df2-43ad-4899-8ac7-548594484377" (UID: "5a301df2-43ad-4899-8ac7-548594484377"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.416604 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a301df2-43ad-4899-8ac7-548594484377-kube-api-access-k8pvw" (OuterVolumeSpecName: "kube-api-access-k8pvw") pod "5a301df2-43ad-4899-8ac7-548594484377" (UID: "5a301df2-43ad-4899-8ac7-548594484377"). InnerVolumeSpecName "kube-api-access-k8pvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503241 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c870d81b-61d3-4eb2-b408-ce51fae0e19f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503279 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503289 4992 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a301df2-43ad-4899-8ac7-548594484377-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503298 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7m25\" (UniqueName: \"kubernetes.io/projected/c870d81b-61d3-4eb2-b408-ce51fae0e19f-kube-api-access-l7m25\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503310 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8pvw\" (UniqueName: \"kubernetes.io/projected/5a301df2-43ad-4899-8ac7-548594484377-kube-api-access-k8pvw\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503319 4992 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-p6pd2\" (UniqueName: \"kubernetes.io/projected/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd-kube-api-access-p6pd2\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.503326 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a301df2-43ad-4899-8ac7-548594484377-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.873318 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerStarted","Data":"bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6"} Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.881458 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" event={"ID":"5a301df2-43ad-4899-8ac7-548594484377","Type":"ContainerDied","Data":"ba3a0047f5be0e5b7974f275b0203addd0dcec8f944f987b746d08d202ee4720"} Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.881517 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3a0047f5be0e5b7974f275b0203addd0dcec8f944f987b746d08d202ee4720" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.881594 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.891293 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.891388 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-580c-account-create-update-5jxfz" event={"ID":"94ba867d-ba4d-4bb1-81aa-b46fa62f43bd","Type":"ContainerDied","Data":"5c4d2a3c51797b94a98a4324eee92d0f8aed8be4ce09156a74dd774cde82efe2"} Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.891420 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c4d2a3c51797b94a98a4324eee92d0f8aed8be4ce09156a74dd774cde82efe2" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.894181 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50b99209-7df9-4ae4-9795-37a362cd6373","Type":"ContainerStarted","Data":"4bc98baee37d97daf983b63111e603e6816ed0512660b0f1d97d0cd0d685272f"} Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.897214 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tnwhj" Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.899294 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tnwhj" event={"ID":"c870d81b-61d3-4eb2-b408-ce51fae0e19f","Type":"ContainerDied","Data":"e0834fae258dc7123b4620c7f7985ab7937a5b9e58d0b5b356ab9b9d6b1d409b"} Dec 11 08:45:07 crc kubenswrapper[4992]: I1211 08:45:07.899342 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0834fae258dc7123b4620c7f7985ab7937a5b9e58d0b5b356ab9b9d6b1d409b" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.122174 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" path="/var/lib/kubelet/pods/ff4bb71d-a1e0-4bb1-8510-ee381c395f87/volumes" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.395941 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.496302 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.524045 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da60d29-4843-4e25-804b-a5b89de8f2f2-operator-scripts\") pod \"1da60d29-4843-4e25-804b-a5b89de8f2f2\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.524280 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtlzh\" (UniqueName: \"kubernetes.io/projected/1da60d29-4843-4e25-804b-a5b89de8f2f2-kube-api-access-gtlzh\") pod \"1da60d29-4843-4e25-804b-a5b89de8f2f2\" (UID: \"1da60d29-4843-4e25-804b-a5b89de8f2f2\") " Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.525003 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da60d29-4843-4e25-804b-a5b89de8f2f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1da60d29-4843-4e25-804b-a5b89de8f2f2" (UID: "1da60d29-4843-4e25-804b-a5b89de8f2f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.526064 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1da60d29-4843-4e25-804b-a5b89de8f2f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.531115 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da60d29-4843-4e25-804b-a5b89de8f2f2-kube-api-access-gtlzh" (OuterVolumeSpecName: "kube-api-access-gtlzh") pod "1da60d29-4843-4e25-804b-a5b89de8f2f2" (UID: "1da60d29-4843-4e25-804b-a5b89de8f2f2"). InnerVolumeSpecName "kube-api-access-gtlzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.628082 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwffd\" (UniqueName: \"kubernetes.io/projected/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-kube-api-access-hwffd\") pod \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.628174 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-operator-scripts\") pod \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\" (UID: \"0b8792cf-dd70-4b14-b007-9e6ed8632bd6\") " Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.628580 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtlzh\" (UniqueName: \"kubernetes.io/projected/1da60d29-4843-4e25-804b-a5b89de8f2f2-kube-api-access-gtlzh\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.628951 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b8792cf-dd70-4b14-b007-9e6ed8632bd6" (UID: "0b8792cf-dd70-4b14-b007-9e6ed8632bd6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.633032 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-kube-api-access-hwffd" (OuterVolumeSpecName: "kube-api-access-hwffd") pod "0b8792cf-dd70-4b14-b007-9e6ed8632bd6" (UID: "0b8792cf-dd70-4b14-b007-9e6ed8632bd6"). InnerVolumeSpecName "kube-api-access-hwffd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.730680 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwffd\" (UniqueName: \"kubernetes.io/projected/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-kube-api-access-hwffd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.730723 4992 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b8792cf-dd70-4b14-b007-9e6ed8632bd6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.915232 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerStarted","Data":"5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f"} Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.916774 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c59a-account-create-update-db54s" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.916949 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c59a-account-create-update-db54s" event={"ID":"0b8792cf-dd70-4b14-b007-9e6ed8632bd6","Type":"ContainerDied","Data":"0183cae01a8f27a0686d25bf3a6cb2c0fab5ba9d92e34657ebe11a53bcb32d18"} Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.917112 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0183cae01a8f27a0686d25bf3a6cb2c0fab5ba9d92e34657ebe11a53bcb32d18" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.919066 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50b99209-7df9-4ae4-9795-37a362cd6373","Type":"ContainerStarted","Data":"cf0f4155cc7241e85d2d0abdd3f5046f0e1ee51e88cd64af6fbd2e334e53b804"} Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.948408 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" event={"ID":"1da60d29-4843-4e25-804b-a5b89de8f2f2","Type":"ContainerDied","Data":"1c6ae15fe34579fe77c9265ba119be08c0dd8b714879fab34bf2aa20efb0651a"} Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.948460 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6ae15fe34579fe77c9265ba119be08c0dd8b714879fab34bf2aa20efb0651a" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.948593 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c9e3-account-create-update-vxlqn" Dec 11 08:45:08 crc kubenswrapper[4992]: I1211 08:45:08.961593 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=27.02927336 podStartE2EDuration="40.961574473s" podCreationTimestamp="2025-12-11 08:44:28 +0000 UTC" firstStartedPulling="2025-12-11 08:44:52.44490149 +0000 UTC m=+1316.704375416" lastFinishedPulling="2025-12-11 08:45:06.377202603 +0000 UTC m=+1330.636676529" observedRunningTime="2025-12-11 08:45:08.954965021 +0000 UTC m=+1333.214438957" watchObservedRunningTime="2025-12-11 08:45:08.961574473 +0000 UTC m=+1333.221048399" Dec 11 08:45:09 crc kubenswrapper[4992]: I1211 08:45:09.055924 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 08:45:10 crc kubenswrapper[4992]: I1211 08:45:10.972671 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerStarted","Data":"a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283"} Dec 11 08:45:10 crc kubenswrapper[4992]: I1211 08:45:10.972987 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="sg-core" containerID="cri-o://5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f" gracePeriod=30 Dec 11 08:45:10 crc kubenswrapper[4992]: I1211 08:45:10.973341 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-central-agent" containerID="cri-o://3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d" gracePeriod=30 Dec 11 08:45:10 crc kubenswrapper[4992]: I1211 08:45:10.973029 4992 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-notification-agent" containerID="cri-o://bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6" gracePeriod=30 Dec 11 08:45:10 crc kubenswrapper[4992]: I1211 08:45:10.973014 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="proxy-httpd" containerID="cri-o://a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283" gracePeriod=30 Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.001385 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6175182120000002 podStartE2EDuration="19.001366333s" podCreationTimestamp="2025-12-11 08:44:52 +0000 UTC" firstStartedPulling="2025-12-11 08:44:53.539128635 +0000 UTC m=+1317.798602561" lastFinishedPulling="2025-12-11 08:45:09.922976756 +0000 UTC m=+1334.182450682" observedRunningTime="2025-12-11 08:45:10.994368912 +0000 UTC m=+1335.253842838" watchObservedRunningTime="2025-12-11 08:45:11.001366333 +0000 UTC m=+1335.260840259" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.322965 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n56r8"] Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.323776 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a301df2-43ad-4899-8ac7-548594484377" containerName="collect-profiles" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.323851 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a301df2-43ad-4899-8ac7-548594484377" containerName="collect-profiles" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.323937 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc 
kubenswrapper[4992]: I1211 08:45:11.323991 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324055 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953feb5a-bed1-4457-b25e-fa4716bbad75" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324108 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="953feb5a-bed1-4457-b25e-fa4716bbad75" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324172 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8792cf-dd70-4b14-b007-9e6ed8632bd6" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324224 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8792cf-dd70-4b14-b007-9e6ed8632bd6" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324289 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c870d81b-61d3-4eb2-b408-ce51fae0e19f" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324340 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c870d81b-61d3-4eb2-b408-ce51fae0e19f" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324395 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88276bfc-171a-4c6d-b2a8-342e9a6f856d" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324491 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="88276bfc-171a-4c6d-b2a8-342e9a6f856d" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324574 4992 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerName="dnsmasq-dns" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324653 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerName="dnsmasq-dns" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324720 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da60d29-4843-4e25-804b-a5b89de8f2f2" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324776 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da60d29-4843-4e25-804b-a5b89de8f2f2" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: E1211 08:45:11.324835 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerName="init" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.324886 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerName="init" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325096 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da60d29-4843-4e25-804b-a5b89de8f2f2" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325176 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4bb71d-a1e0-4bb1-8510-ee381c395f87" containerName="dnsmasq-dns" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325236 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="88276bfc-171a-4c6d-b2a8-342e9a6f856d" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325289 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8792cf-dd70-4b14-b007-9e6ed8632bd6" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325346 4992 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c870d81b-61d3-4eb2-b408-ce51fae0e19f" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325406 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="953feb5a-bed1-4457-b25e-fa4716bbad75" containerName="mariadb-database-create" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325464 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a301df2-43ad-4899-8ac7-548594484377" containerName="collect-profiles" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.325521 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" containerName="mariadb-account-create-update" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.326127 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.328052 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xzp6c" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.329174 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.329193 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.342807 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n56r8"] Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.484811 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-config-data\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " 
pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.484892 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.484937 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-scripts\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.485008 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqcg\" (UniqueName: \"kubernetes.io/projected/8ead296a-d746-4d8b-a8c5-b51c08bf2422-kube-api-access-rcqcg\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.587183 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.587232 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-scripts\") pod 
\"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.587293 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqcg\" (UniqueName: \"kubernetes.io/projected/8ead296a-d746-4d8b-a8c5-b51c08bf2422-kube-api-access-rcqcg\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.587422 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-config-data\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.593702 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-config-data\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.596143 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.596155 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-scripts\") pod 
\"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.611388 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqcg\" (UniqueName: \"kubernetes.io/projected/8ead296a-d746-4d8b-a8c5-b51c08bf2422-kube-api-access-rcqcg\") pod \"nova-cell0-conductor-db-sync-n56r8\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.663357 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.985271 4992 generic.go:334] "Generic (PLEG): container finished" podID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerID="a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283" exitCode=0 Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.985622 4992 generic.go:334] "Generic (PLEG): container finished" podID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerID="5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f" exitCode=2 Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.985652 4992 generic.go:334] "Generic (PLEG): container finished" podID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerID="bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6" exitCode=0 Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.985353 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerDied","Data":"a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283"} Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.985701 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerDied","Data":"5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f"} Dec 11 08:45:11 crc kubenswrapper[4992]: I1211 08:45:11.985720 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerDied","Data":"bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6"} Dec 11 08:45:12 crc kubenswrapper[4992]: I1211 08:45:12.163377 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n56r8"] Dec 11 08:45:12 crc kubenswrapper[4992]: W1211 08:45:12.167360 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ead296a_d746_4d8b_a8c5_b51c08bf2422.slice/crio-96e14128f30d2e6d3606d225ccc37dd0bb3193ed54aec8f741f49675e844f059 WatchSource:0}: Error finding container 96e14128f30d2e6d3606d225ccc37dd0bb3193ed54aec8f741f49675e844f059: Status 404 returned error can't find the container with id 96e14128f30d2e6d3606d225ccc37dd0bb3193ed54aec8f741f49675e844f059 Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.001313 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n56r8" event={"ID":"8ead296a-d746-4d8b-a8c5-b51c08bf2422","Type":"ContainerStarted","Data":"96e14128f30d2e6d3606d225ccc37dd0bb3193ed54aec8f741f49675e844f059"} Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.299922 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.566772 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.732299 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-scripts\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.732359 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-log-httpd\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.732384 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-run-httpd\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.732750 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.732823 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-combined-ca-bundle\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.732884 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.733235 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-sg-core-conf-yaml\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.733290 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84hzh\" (UniqueName: \"kubernetes.io/projected/0551f53f-db50-4bbe-9cce-d605d03bd91f-kube-api-access-84hzh\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.733377 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-config-data\") pod \"0551f53f-db50-4bbe-9cce-d605d03bd91f\" (UID: \"0551f53f-db50-4bbe-9cce-d605d03bd91f\") " Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.734015 4992 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.734044 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0551f53f-db50-4bbe-9cce-d605d03bd91f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.739708 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0551f53f-db50-4bbe-9cce-d605d03bd91f-kube-api-access-84hzh" (OuterVolumeSpecName: "kube-api-access-84hzh") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "kube-api-access-84hzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.745781 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-scripts" (OuterVolumeSpecName: "scripts") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.781580 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.835429 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.835461 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84hzh\" (UniqueName: \"kubernetes.io/projected/0551f53f-db50-4bbe-9cce-d605d03bd91f-kube-api-access-84hzh\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.835471 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.838298 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-config-data" (OuterVolumeSpecName: "config-data") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.843945 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0551f53f-db50-4bbe-9cce-d605d03bd91f" (UID: "0551f53f-db50-4bbe-9cce-d605d03bd91f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.936810 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:13 crc kubenswrapper[4992]: I1211 08:45:13.936843 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0551f53f-db50-4bbe-9cce-d605d03bd91f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.012821 4992 generic.go:334] "Generic (PLEG): container finished" podID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerID="3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d" exitCode=0 Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.012869 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerDied","Data":"3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d"} Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.012897 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0551f53f-db50-4bbe-9cce-d605d03bd91f","Type":"ContainerDied","Data":"17c40888257c9c2b920e57634ed0522331e6cf5d56632ae5407a33fb3b6e5f9e"} Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.012918 4992 scope.go:117] "RemoveContainer" containerID="a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.013060 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.047783 4992 scope.go:117] "RemoveContainer" containerID="5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.054457 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.069678 4992 scope.go:117] "RemoveContainer" containerID="bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.076181 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.086165 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.086657 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="proxy-httpd" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.086683 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="proxy-httpd" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.086708 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-notification-agent" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.086718 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-notification-agent" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.086757 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="sg-core" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.086765 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="sg-core" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.086798 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-central-agent" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.086807 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-central-agent" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.087026 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="proxy-httpd" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.087049 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-central-agent" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.087067 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="ceilometer-notification-agent" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.087088 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" containerName="sg-core" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.089741 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.097083 4992 scope.go:117] "RemoveContainer" containerID="3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.100423 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.100432 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.116386 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0551f53f-db50-4bbe-9cce-d605d03bd91f" path="/var/lib/kubelet/pods/0551f53f-db50-4bbe-9cce-d605d03bd91f/volumes" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.117214 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.135707 4992 scope.go:117] "RemoveContainer" containerID="a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.136152 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283\": container with ID starting with a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283 not found: ID does not exist" containerID="a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.136205 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283"} err="failed to get container status \"a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283\": rpc error: code = NotFound desc = could not find 
container \"a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283\": container with ID starting with a7afa8f1cbe44b48e9009ba0968e75dcb1a778777abb5730bad74de7a6adb283 not found: ID does not exist" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.136241 4992 scope.go:117] "RemoveContainer" containerID="5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.136676 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f\": container with ID starting with 5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f not found: ID does not exist" containerID="5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.136716 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f"} err="failed to get container status \"5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f\": rpc error: code = NotFound desc = could not find container \"5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f\": container with ID starting with 5104060d5e8b422b7411f09a4f2fb77e3e761c6f680be190b81de8903a70ca3f not found: ID does not exist" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.136737 4992 scope.go:117] "RemoveContainer" containerID="bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.137026 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6\": container with ID starting with bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6 not found: ID does 
not exist" containerID="bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.137049 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6"} err="failed to get container status \"bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6\": rpc error: code = NotFound desc = could not find container \"bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6\": container with ID starting with bbf71aa860781ba6030a51160081f2793c239010812ee5fd003c4d96a60730f6 not found: ID does not exist" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.137064 4992 scope.go:117] "RemoveContainer" containerID="3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d" Dec 11 08:45:14 crc kubenswrapper[4992]: E1211 08:45:14.137277 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d\": container with ID starting with 3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d not found: ID does not exist" containerID="3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.137298 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d"} err="failed to get container status \"3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d\": rpc error: code = NotFound desc = could not find container \"3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d\": container with ID starting with 3bff9fb03a91db0b12494e213d74ceaedd94e39405a10634490aac07c966301d not found: ID does not exist" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245159 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5l5v\" (UniqueName: \"kubernetes.io/projected/5e578d8f-7c8a-43ab-8a29-70848890aeb1-kube-api-access-g5l5v\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245356 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245426 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-run-httpd\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245700 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-log-httpd\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245723 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-config-data\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245818 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.245859 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-scripts\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.298196 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.340241 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.347090 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.347169 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-scripts\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.347200 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5l5v\" (UniqueName: \"kubernetes.io/projected/5e578d8f-7c8a-43ab-8a29-70848890aeb1-kube-api-access-g5l5v\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 
08:45:14.347283 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.347354 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-run-httpd\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.347387 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-log-httpd\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.347410 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-config-data\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.348025 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-run-httpd\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.348139 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.352445 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.354211 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-scripts\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.355400 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.363769 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-config-data\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.364347 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5l5v\" (UniqueName: \"kubernetes.io/projected/5e578d8f-7c8a-43ab-8a29-70848890aeb1-kube-api-access-g5l5v\") pod \"ceilometer-0\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.415585 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:14 crc kubenswrapper[4992]: I1211 08:45:14.852751 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:14 crc kubenswrapper[4992]: W1211 08:45:14.858139 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e578d8f_7c8a_43ab_8a29_70848890aeb1.slice/crio-88f701d71fd60f28a66417325ff96615c472f0fe2221c0c782d00a4bff420672 WatchSource:0}: Error finding container 88f701d71fd60f28a66417325ff96615c472f0fe2221c0c782d00a4bff420672: Status 404 returned error can't find the container with id 88f701d71fd60f28a66417325ff96615c472f0fe2221c0c782d00a4bff420672 Dec 11 08:45:15 crc kubenswrapper[4992]: I1211 08:45:15.025257 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerStarted","Data":"88f701d71fd60f28a66417325ff96615c472f0fe2221c0c782d00a4bff420672"} Dec 11 08:45:15 crc kubenswrapper[4992]: I1211 08:45:15.025356 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="cinder-scheduler" containerID="cri-o://4bc98baee37d97daf983b63111e603e6816ed0512660b0f1d97d0cd0d685272f" gracePeriod=30 Dec 11 08:45:15 crc kubenswrapper[4992]: I1211 08:45:15.025412 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="probe" containerID="cri-o://cf0f4155cc7241e85d2d0abdd3f5046f0e1ee51e88cd64af6fbd2e334e53b804" gracePeriod=30 Dec 11 08:45:17 crc kubenswrapper[4992]: I1211 08:45:17.059548 4992 generic.go:334] "Generic (PLEG): container finished" podID="50b99209-7df9-4ae4-9795-37a362cd6373" containerID="cf0f4155cc7241e85d2d0abdd3f5046f0e1ee51e88cd64af6fbd2e334e53b804" exitCode=0 Dec 
11 08:45:17 crc kubenswrapper[4992]: I1211 08:45:17.059612 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50b99209-7df9-4ae4-9795-37a362cd6373","Type":"ContainerDied","Data":"cf0f4155cc7241e85d2d0abdd3f5046f0e1ee51e88cd64af6fbd2e334e53b804"} Dec 11 08:45:18 crc kubenswrapper[4992]: I1211 08:45:18.070864 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerStarted","Data":"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817"} Dec 11 08:45:19 crc kubenswrapper[4992]: I1211 08:45:19.088787 4992 generic.go:334] "Generic (PLEG): container finished" podID="50b99209-7df9-4ae4-9795-37a362cd6373" containerID="4bc98baee37d97daf983b63111e603e6816ed0512660b0f1d97d0cd0d685272f" exitCode=0 Dec 11 08:45:19 crc kubenswrapper[4992]: I1211 08:45:19.088829 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50b99209-7df9-4ae4-9795-37a362cd6373","Type":"ContainerDied","Data":"4bc98baee37d97daf983b63111e603e6816ed0512660b0f1d97d0cd0d685272f"} Dec 11 08:45:20 crc kubenswrapper[4992]: I1211 08:45:20.077604 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:45:20 crc kubenswrapper[4992]: I1211 08:45:20.083398 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-httpd" containerID="cri-o://5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f" gracePeriod=30 Dec 11 08:45:20 crc kubenswrapper[4992]: I1211 08:45:20.083401 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-log" 
containerID="cri-o://9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047" gracePeriod=30 Dec 11 08:45:21 crc kubenswrapper[4992]: I1211 08:45:21.121951 4992 generic.go:334] "Generic (PLEG): container finished" podID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerID="9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047" exitCode=143 Dec 11 08:45:21 crc kubenswrapper[4992]: I1211 08:45:21.122003 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb31451d-5ece-4d9a-a6ad-b781668ecbdb","Type":"ContainerDied","Data":"9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047"} Dec 11 08:45:21 crc kubenswrapper[4992]: I1211 08:45:21.136401 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:45:21 crc kubenswrapper[4992]: I1211 08:45:21.138106 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-log" containerID="cri-o://0701fd4db1efc8c804ad56c7e9b57ca640a87b64838528c17632b9a37c920e05" gracePeriod=30 Dec 11 08:45:21 crc kubenswrapper[4992]: I1211 08:45:21.138177 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-httpd" containerID="cri-o://f2ef47261e2fb1871d241e57ce9e1cb95d90b4e3cefc4ce17773c1642c13c61c" gracePeriod=30 Dec 11 08:45:21 crc kubenswrapper[4992]: I1211 08:45:21.737451 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.017908 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.102525 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data\") pod \"50b99209-7df9-4ae4-9795-37a362cd6373\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.103921 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmgz7\" (UniqueName: \"kubernetes.io/projected/50b99209-7df9-4ae4-9795-37a362cd6373-kube-api-access-rmgz7\") pod \"50b99209-7df9-4ae4-9795-37a362cd6373\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.104573 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-scripts\") pod \"50b99209-7df9-4ae4-9795-37a362cd6373\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.104608 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-combined-ca-bundle\") pod \"50b99209-7df9-4ae4-9795-37a362cd6373\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.104687 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data-custom\") pod \"50b99209-7df9-4ae4-9795-37a362cd6373\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.104714 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/50b99209-7df9-4ae4-9795-37a362cd6373-etc-machine-id\") pod \"50b99209-7df9-4ae4-9795-37a362cd6373\" (UID: \"50b99209-7df9-4ae4-9795-37a362cd6373\") " Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.105419 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50b99209-7df9-4ae4-9795-37a362cd6373-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "50b99209-7df9-4ae4-9795-37a362cd6373" (UID: "50b99209-7df9-4ae4-9795-37a362cd6373"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.108462 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50b99209-7df9-4ae4-9795-37a362cd6373-kube-api-access-rmgz7" (OuterVolumeSpecName: "kube-api-access-rmgz7") pod "50b99209-7df9-4ae4-9795-37a362cd6373" (UID: "50b99209-7df9-4ae4-9795-37a362cd6373"). InnerVolumeSpecName "kube-api-access-rmgz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.110192 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "50b99209-7df9-4ae4-9795-37a362cd6373" (UID: "50b99209-7df9-4ae4-9795-37a362cd6373"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.110650 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-scripts" (OuterVolumeSpecName: "scripts") pod "50b99209-7df9-4ae4-9795-37a362cd6373" (UID: "50b99209-7df9-4ae4-9795-37a362cd6373"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.138990 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n56r8" event={"ID":"8ead296a-d746-4d8b-a8c5-b51c08bf2422","Type":"ContainerStarted","Data":"161173e658d40330fe5968c18371acedb040a57ecb50a9236c73daca5cf866ab"} Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.149265 4992 generic.go:334] "Generic (PLEG): container finished" podID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerID="0701fd4db1efc8c804ad56c7e9b57ca640a87b64838528c17632b9a37c920e05" exitCode=143 Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.149344 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd74de0b-d6c0-4892-befc-cd81b18a63ad","Type":"ContainerDied","Data":"0701fd4db1efc8c804ad56c7e9b57ca640a87b64838528c17632b9a37c920e05"} Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.152728 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"50b99209-7df9-4ae4-9795-37a362cd6373","Type":"ContainerDied","Data":"7d2867949cdd2c4b986ad6a52524094a51b8161d66fba4443e3a4c29dd00b9cd"} Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.152779 4992 scope.go:117] "RemoveContainer" containerID="cf0f4155cc7241e85d2d0abdd3f5046f0e1ee51e88cd64af6fbd2e334e53b804" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.153848 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.162706 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50b99209-7df9-4ae4-9795-37a362cd6373" (UID: "50b99209-7df9-4ae4-9795-37a362cd6373"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.162822 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-n56r8" podStartSLOduration=1.5434635399999999 podStartE2EDuration="11.162805488s" podCreationTimestamp="2025-12-11 08:45:11 +0000 UTC" firstStartedPulling="2025-12-11 08:45:12.170261608 +0000 UTC m=+1336.429735534" lastFinishedPulling="2025-12-11 08:45:21.789603556 +0000 UTC m=+1346.049077482" observedRunningTime="2025-12-11 08:45:22.157028237 +0000 UTC m=+1346.416502173" watchObservedRunningTime="2025-12-11 08:45:22.162805488 +0000 UTC m=+1346.422279414" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.181321 4992 scope.go:117] "RemoveContainer" containerID="4bc98baee37d97daf983b63111e603e6816ed0512660b0f1d97d0cd0d685272f" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.207423 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmgz7\" (UniqueName: \"kubernetes.io/projected/50b99209-7df9-4ae4-9795-37a362cd6373-kube-api-access-rmgz7\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.207450 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.207459 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.207468 4992 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 
08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.207476 4992 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/50b99209-7df9-4ae4-9795-37a362cd6373-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.216935 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data" (OuterVolumeSpecName: "config-data") pod "50b99209-7df9-4ae4-9795-37a362cd6373" (UID: "50b99209-7df9-4ae4-9795-37a362cd6373"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.309153 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50b99209-7df9-4ae4-9795-37a362cd6373-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.508421 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.517475 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.527752 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:45:22 crc kubenswrapper[4992]: E1211 08:45:22.528227 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="probe" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.528252 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="probe" Dec 11 08:45:22 crc kubenswrapper[4992]: E1211 08:45:22.528274 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="cinder-scheduler" Dec 11 
08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.528284 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="cinder-scheduler" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.528515 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="probe" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.528557 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" containerName="cinder-scheduler" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.529777 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.533040 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.535068 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.618073 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.618143 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-config-data\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.618306 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.618414 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-scripts\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.618685 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e65777ee-c1c8-48f5-a103-539738e7c293-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.619043 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmqqc\" (UniqueName: \"kubernetes.io/projected/e65777ee-c1c8-48f5-a103-539738e7c293-kube-api-access-wmqqc\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.720995 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e65777ee-c1c8-48f5-a103-539738e7c293-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.721331 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmqqc\" (UniqueName: 
\"kubernetes.io/projected/e65777ee-c1c8-48f5-a103-539738e7c293-kube-api-access-wmqqc\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.721373 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.721411 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-config-data\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.721446 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.721471 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-scripts\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.725205 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e65777ee-c1c8-48f5-a103-539738e7c293-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " 
pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.726130 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-scripts\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.727332 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.729483 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.729704 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e65777ee-c1c8-48f5-a103-539738e7c293-config-data\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.749087 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmqqc\" (UniqueName: \"kubernetes.io/projected/e65777ee-c1c8-48f5-a103-539738e7c293-kube-api-access-wmqqc\") pod \"cinder-scheduler-0\" (UID: \"e65777ee-c1c8-48f5-a103-539738e7c293\") " pod="openstack/cinder-scheduler-0" Dec 11 08:45:22 crc kubenswrapper[4992]: I1211 08:45:22.851312 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 08:45:23 crc kubenswrapper[4992]: I1211 08:45:23.165187 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerStarted","Data":"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6"} Dec 11 08:45:23 crc kubenswrapper[4992]: I1211 08:45:23.329098 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.106877 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50b99209-7df9-4ae4-9795-37a362cd6373" path="/var/lib/kubelet/pods/50b99209-7df9-4ae4-9795-37a362cd6373/volumes" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.159467 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.177346 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e65777ee-c1c8-48f5-a103-539738e7c293","Type":"ContainerStarted","Data":"b5215ee73d5653ccedb00e892561fc3fa61f00b3a402969da73b418c3825fda5"} Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.181232 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerStarted","Data":"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19"} Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.192895 4992 generic.go:334] "Generic (PLEG): container finished" podID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerID="5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f" exitCode=0 Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.192951 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"eb31451d-5ece-4d9a-a6ad-b781668ecbdb","Type":"ContainerDied","Data":"5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f"} Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.192983 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb31451d-5ece-4d9a-a6ad-b781668ecbdb","Type":"ContainerDied","Data":"87636abfeccd6213d94c7148eadd2d8293e8c02410df87a6b3c0d813396765f0"} Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.193004 4992 scope.go:117] "RemoveContainer" containerID="5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.193194 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.231899 4992 scope.go:117] "RemoveContainer" containerID="9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.260832 4992 scope.go:117] "RemoveContainer" containerID="5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f" Dec 11 08:45:24 crc kubenswrapper[4992]: E1211 08:45:24.261293 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f\": container with ID starting with 5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f not found: ID does not exist" containerID="5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.261322 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f"} err="failed to get container status \"5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f\": rpc 
error: code = NotFound desc = could not find container \"5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f\": container with ID starting with 5a327e66c736886195e002e78c584e4a99b55f8f2658d841bff273d02d066a0f not found: ID does not exist" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.261342 4992 scope.go:117] "RemoveContainer" containerID="9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262166 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-combined-ca-bundle\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262220 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-logs\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262311 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-config-data\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262383 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-scripts\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262442 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-public-tls-certs\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262511 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-httpd-run\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262557 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbwgp\" (UniqueName: \"kubernetes.io/projected/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-kube-api-access-fbwgp\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.262584 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\" (UID: \"eb31451d-5ece-4d9a-a6ad-b781668ecbdb\") " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.263300 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-logs" (OuterVolumeSpecName: "logs") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.263760 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: E1211 08:45:24.264730 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047\": container with ID starting with 9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047 not found: ID does not exist" containerID="9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.264735 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.264755 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047"} err="failed to get container status \"9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047\": rpc error: code = NotFound desc = could not find container \"9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047\": container with ID starting with 9c64d2868b094d0afeb16672ffd5286d3b638bc3972938ba5d704f26ddf5f047 not found: ID does not exist" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.275831 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-scripts" (OuterVolumeSpecName: "scripts") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.276279 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-kube-api-access-fbwgp" (OuterVolumeSpecName: "kube-api-access-fbwgp") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "kube-api-access-fbwgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.289334 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.299142 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.331490 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.337875 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-config-data" (OuterVolumeSpecName: "config-data") pod "eb31451d-5ece-4d9a-a6ad-b781668ecbdb" (UID: "eb31451d-5ece-4d9a-a6ad-b781668ecbdb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365720 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365761 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365770 4992 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365780 4992 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365790 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbwgp\" (UniqueName: \"kubernetes.io/projected/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-kube-api-access-fbwgp\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365823 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.365832 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb31451d-5ece-4d9a-a6ad-b781668ecbdb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.392177 4992 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.468063 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.549246 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.563967 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.584492 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:45:24 crc kubenswrapper[4992]: E1211 08:45:24.588110 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-httpd" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.588135 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-httpd" Dec 11 08:45:24 crc kubenswrapper[4992]: E1211 08:45:24.588162 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-log" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.588169 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-log" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.588349 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-log" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.588374 4992 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" containerName="glance-httpd" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.592880 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.597262 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.597490 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.601602 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777103 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777156 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777328 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/112fb236-1ef9-4991-b83c-91c1081483fc-logs\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 
11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777387 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/112fb236-1ef9-4991-b83c-91c1081483fc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777518 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-config-data\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777626 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-scripts\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777687 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4w86\" (UniqueName: \"kubernetes.io/projected/112fb236-1ef9-4991-b83c-91c1081483fc-kube-api-access-g4w86\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.777714 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0" 
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879289 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-scripts\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879603 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4w86\" (UniqueName: \"kubernetes.io/projected/112fb236-1ef9-4991-b83c-91c1081483fc-kube-api-access-g4w86\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879626 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879696 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879721 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879761 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/112fb236-1ef9-4991-b83c-91c1081483fc-logs\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879782 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/112fb236-1ef9-4991-b83c-91c1081483fc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.879819 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-config-data\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.880193 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.882992 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/112fb236-1ef9-4991-b83c-91c1081483fc-logs\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.883102 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/112fb236-1ef9-4991-b83c-91c1081483fc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.887165 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-config-data\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.887254 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-scripts\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.890964 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.891563 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112fb236-1ef9-4991-b83c-91c1081483fc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.898354 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4w86\" (UniqueName: \"kubernetes.io/projected/112fb236-1ef9-4991-b83c-91c1081483fc-kube-api-access-g4w86\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.924087 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"112fb236-1ef9-4991-b83c-91c1081483fc\") " pod="openstack/glance-default-external-api-0"
Dec 11 08:45:24 crc kubenswrapper[4992]: I1211 08:45:24.953286 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.248072 4992 generic.go:334] "Generic (PLEG): container finished" podID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerID="f2ef47261e2fb1871d241e57ce9e1cb95d90b4e3cefc4ce17773c1642c13c61c" exitCode=0
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.248503 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd74de0b-d6c0-4892-befc-cd81b18a63ad","Type":"ContainerDied","Data":"f2ef47261e2fb1871d241e57ce9e1cb95d90b4e3cefc4ce17773c1642c13c61c"}
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.253987 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e65777ee-c1c8-48f5-a103-539738e7c293","Type":"ContainerStarted","Data":"5d857ea13065f7222de74c1e1c1b4c0e8523b9a15bccd596970724837a4d7a7e"}
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.277061 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392448 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392568 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-config-data\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392614 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-httpd-run\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392646 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-combined-ca-bundle\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392679 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-logs\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392703 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-internal-tls-certs\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392776 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djbkg\" (UniqueName: \"kubernetes.io/projected/fd74de0b-d6c0-4892-befc-cd81b18a63ad-kube-api-access-djbkg\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.392792 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-scripts\") pod \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\" (UID: \"fd74de0b-d6c0-4892-befc-cd81b18a63ad\") "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.393437 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.393605 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-logs" (OuterVolumeSpecName: "logs") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.402533 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd74de0b-d6c0-4892-befc-cd81b18a63ad-kube-api-access-djbkg" (OuterVolumeSpecName: "kube-api-access-djbkg") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "kube-api-access-djbkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.406749 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-scripts" (OuterVolumeSpecName: "scripts") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.407150 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.495161 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djbkg\" (UniqueName: \"kubernetes.io/projected/fd74de0b-d6c0-4892-befc-cd81b18a63ad-kube-api-access-djbkg\") on node \"crc\" DevicePath \"\""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.495191 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.495240 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.495251 4992 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.495261 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd74de0b-d6c0-4892-befc-cd81b18a63ad-logs\") on node \"crc\" DevicePath \"\""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.500285 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.589241 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.597734 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.652802 4992 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.673721 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.682902 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-config-data" (OuterVolumeSpecName: "config-data") pod "fd74de0b-d6c0-4892-befc-cd81b18a63ad" (UID: "fd74de0b-d6c0-4892-befc-cd81b18a63ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.699439 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.699478 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:25 crc kubenswrapper[4992]: I1211 08:45:25.699496 4992 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd74de0b-d6c0-4892-befc-cd81b18a63ad-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.106553 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb31451d-5ece-4d9a-a6ad-b781668ecbdb" path="/var/lib/kubelet/pods/eb31451d-5ece-4d9a-a6ad-b781668ecbdb/volumes" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.277463 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerStarted","Data":"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98"} Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.278006 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-central-agent" containerID="cri-o://3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" gracePeriod=30 Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.278127 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.278733 4992 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="proxy-httpd" containerID="cri-o://6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" gracePeriod=30 Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.278894 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-notification-agent" containerID="cri-o://29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" gracePeriod=30 Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.278940 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="sg-core" containerID="cri-o://70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" gracePeriod=30 Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.283681 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"112fb236-1ef9-4991-b83c-91c1081483fc","Type":"ContainerStarted","Data":"bf3fbab7254be03d0278b554ab40b0268a3e4d41ccccde5ad590f30da9e01753"} Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.283754 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"112fb236-1ef9-4991-b83c-91c1081483fc","Type":"ContainerStarted","Data":"e1db13e6586d9c46f82dae39b0b76a43eb8e32405034323db4c58bdd9a58512e"} Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.304792 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.049627281 podStartE2EDuration="12.304772666s" podCreationTimestamp="2025-12-11 08:45:14 +0000 UTC" firstStartedPulling="2025-12-11 08:45:14.86087092 +0000 UTC m=+1339.120344836" lastFinishedPulling="2025-12-11 08:45:25.116016295 +0000 
UTC m=+1349.375490221" observedRunningTime="2025-12-11 08:45:26.296252577 +0000 UTC m=+1350.555726523" watchObservedRunningTime="2025-12-11 08:45:26.304772666 +0000 UTC m=+1350.564246602" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.310620 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd74de0b-d6c0-4892-befc-cd81b18a63ad","Type":"ContainerDied","Data":"ac9cf8286ffb3b087d03f0fae8c271412d82262c273ee09222a181e447a55be2"} Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.310709 4992 scope.go:117] "RemoveContainer" containerID="f2ef47261e2fb1871d241e57ce9e1cb95d90b4e3cefc4ce17773c1642c13c61c" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.310863 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.321778 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e65777ee-c1c8-48f5-a103-539738e7c293","Type":"ContainerStarted","Data":"5d5770b23463efeca24a79a768ac2105fff34442208bb6d82777ca7c7b889cb1"} Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.361476 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.361454564 podStartE2EDuration="4.361454564s" podCreationTimestamp="2025-12-11 08:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:26.348109068 +0000 UTC m=+1350.607582994" watchObservedRunningTime="2025-12-11 08:45:26.361454564 +0000 UTC m=+1350.620928490" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.386981 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.401858 4992 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.411791 4992 scope.go:117] "RemoveContainer" containerID="0701fd4db1efc8c804ad56c7e9b57ca640a87b64838528c17632b9a37c920e05" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.414908 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:45:26 crc kubenswrapper[4992]: E1211 08:45:26.415590 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-log" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.415612 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-log" Dec 11 08:45:26 crc kubenswrapper[4992]: E1211 08:45:26.415655 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-httpd" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.415664 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-httpd" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.416011 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-httpd" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.416056 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" containerName="glance-log" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.417565 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.437125 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.437265 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.468314 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.515980 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8264506-1cab-488a-903d-43a6062db6ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516065 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516116 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516135 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516153 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7z2\" (UniqueName: \"kubernetes.io/projected/b8264506-1cab-488a-903d-43a6062db6ae-kube-api-access-fl7z2\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516193 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8264506-1cab-488a-903d-43a6062db6ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516513 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.516560 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618552 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618613 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618659 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618678 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7z2\" (UniqueName: \"kubernetes.io/projected/b8264506-1cab-488a-903d-43a6062db6ae-kube-api-access-fl7z2\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618705 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8264506-1cab-488a-903d-43a6062db6ae-logs\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618780 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618797 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.618841 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8264506-1cab-488a-903d-43a6062db6ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.619315 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8264506-1cab-488a-903d-43a6062db6ae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.619495 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.619526 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8264506-1cab-488a-903d-43a6062db6ae-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.623448 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.623846 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.627386 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.629620 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8264506-1cab-488a-903d-43a6062db6ae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.642951 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7z2\" (UniqueName: \"kubernetes.io/projected/b8264506-1cab-488a-903d-43a6062db6ae-kube-api-access-fl7z2\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.651751 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"b8264506-1cab-488a-903d-43a6062db6ae\") " pod="openstack/glance-default-internal-api-0" Dec 11 08:45:26 crc kubenswrapper[4992]: I1211 08:45:26.777109 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.299454 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332113 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-scripts\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332170 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-combined-ca-bundle\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332197 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-config-data\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332258 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-run-httpd\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332292 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5l5v\" (UniqueName: \"kubernetes.io/projected/5e578d8f-7c8a-43ab-8a29-70848890aeb1-kube-api-access-g5l5v\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332397 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-sg-core-conf-yaml\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.332444 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-log-httpd\") pod \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\" (UID: \"5e578d8f-7c8a-43ab-8a29-70848890aeb1\") " Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.333300 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.333603 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.339706 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-scripts" (OuterVolumeSpecName: "scripts") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344436 4992 generic.go:334] "Generic (PLEG): container finished" podID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" exitCode=0 Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344472 4992 generic.go:334] "Generic (PLEG): container finished" podID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" exitCode=2 Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344481 4992 generic.go:334] "Generic (PLEG): container finished" podID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" exitCode=0 Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344494 4992 generic.go:334] "Generic (PLEG): container finished" podID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" exitCode=0 Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344579 4992 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344587 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerDied","Data":"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98"} Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344737 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerDied","Data":"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19"} Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344756 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerDied","Data":"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6"} Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344767 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerDied","Data":"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817"} Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344779 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e578d8f-7c8a-43ab-8a29-70848890aeb1","Type":"ContainerDied","Data":"88f701d71fd60f28a66417325ff96615c472f0fe2221c0c782d00a4bff420672"} Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.344798 4992 scope.go:117] "RemoveContainer" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.349890 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e578d8f-7c8a-43ab-8a29-70848890aeb1-kube-api-access-g5l5v" 
(OuterVolumeSpecName: "kube-api-access-g5l5v") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "kube-api-access-g5l5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.355768 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"112fb236-1ef9-4991-b83c-91c1081483fc","Type":"ContainerStarted","Data":"590a6994454c9b17f35ab7992e0402209638ae0053101649985fc9ebf9df1f02"} Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.376815 4992 scope.go:117] "RemoveContainer" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.386095 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.394757 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.399077 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.399054272 podStartE2EDuration="3.399054272s" podCreationTimestamp="2025-12-11 08:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:27.377162447 +0000 UTC m=+1351.636636383" watchObservedRunningTime="2025-12-11 08:45:27.399054272 +0000 UTC m=+1351.658528198" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.410874 4992 scope.go:117] "RemoveContainer" containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.431763 4992 scope.go:117] "RemoveContainer" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.434416 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.434625 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5l5v\" (UniqueName: \"kubernetes.io/projected/5e578d8f-7c8a-43ab-8a29-70848890aeb1-kube-api-access-g5l5v\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.434780 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: 
I1211 08:45:27.434926 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e578d8f-7c8a-43ab-8a29-70848890aeb1-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.434971 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.462764 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.466782 4992 scope.go:117] "RemoveContainer" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.467668 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": container with ID starting with 6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98 not found: ID does not exist" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.467708 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98"} err="failed to get container status \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": rpc error: code = NotFound desc = could not find container 
\"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": container with ID starting with 6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.467734 4992 scope.go:117] "RemoveContainer" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.468070 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": container with ID starting with 70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19 not found: ID does not exist" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.468094 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19"} err="failed to get container status \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": rpc error: code = NotFound desc = could not find container \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": container with ID starting with 70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.468109 4992 scope.go:117] "RemoveContainer" containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.468681 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": container with ID starting with 29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6 not found: ID does not exist" 
containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.468706 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6"} err="failed to get container status \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": rpc error: code = NotFound desc = could not find container \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": container with ID starting with 29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.468729 4992 scope.go:117] "RemoveContainer" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.469732 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": container with ID starting with 3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817 not found: ID does not exist" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.469760 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817"} err="failed to get container status \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": rpc error: code = NotFound desc = could not find container \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": container with ID starting with 3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.469783 4992 scope.go:117] 
"RemoveContainer" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.470193 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98"} err="failed to get container status \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": rpc error: code = NotFound desc = could not find container \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": container with ID starting with 6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.470217 4992 scope.go:117] "RemoveContainer" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.470583 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19"} err="failed to get container status \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": rpc error: code = NotFound desc = could not find container \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": container with ID starting with 70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.470654 4992 scope.go:117] "RemoveContainer" containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.470962 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6"} err="failed to get container status \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": rpc error: code = 
NotFound desc = could not find container \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": container with ID starting with 29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.470992 4992 scope.go:117] "RemoveContainer" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.471226 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817"} err="failed to get container status \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": rpc error: code = NotFound desc = could not find container \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": container with ID starting with 3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.471252 4992 scope.go:117] "RemoveContainer" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.471623 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98"} err="failed to get container status \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": rpc error: code = NotFound desc = could not find container \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": container with ID starting with 6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.471660 4992 scope.go:117] "RemoveContainer" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" Dec 11 08:45:27 crc 
kubenswrapper[4992]: I1211 08:45:27.471839 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19"} err="failed to get container status \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": rpc error: code = NotFound desc = could not find container \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": container with ID starting with 70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.471863 4992 scope.go:117] "RemoveContainer" containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.472190 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6"} err="failed to get container status \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": rpc error: code = NotFound desc = could not find container \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": container with ID starting with 29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.472219 4992 scope.go:117] "RemoveContainer" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.472698 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817"} err="failed to get container status \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": rpc error: code = NotFound desc = could not find container \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": container 
with ID starting with 3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.472723 4992 scope.go:117] "RemoveContainer" containerID="6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.473349 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98"} err="failed to get container status \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": rpc error: code = NotFound desc = could not find container \"6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98\": container with ID starting with 6675d58ad79eef64367d133e56f82930d1ffd092b5f783455456c64d90e14e98 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.473374 4992 scope.go:117] "RemoveContainer" containerID="70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.473556 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19"} err="failed to get container status \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": rpc error: code = NotFound desc = could not find container \"70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19\": container with ID starting with 70ef1f388b7444c6dd746a869d664548ed9f3ba4066fe473c41f69933314ca19 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.473568 4992 scope.go:117] "RemoveContainer" containerID="29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.473949 4992 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6"} err="failed to get container status \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": rpc error: code = NotFound desc = could not find container \"29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6\": container with ID starting with 29e8dd08a7abeb608c6011f38660baf5a7a27dcbfd9982f430f961b53dc545d6 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.473965 4992 scope.go:117] "RemoveContainer" containerID="3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.474132 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817"} err="failed to get container status \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": rpc error: code = NotFound desc = could not find container \"3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817\": container with ID starting with 3254c809f46013778b1e1ec2e5e1dc4ffedc0fdca7daaf7a5ea106eb7b05b817 not found: ID does not exist" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.498785 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-config-data" (OuterVolumeSpecName: "config-data") pod "5e578d8f-7c8a-43ab-8a29-70848890aeb1" (UID: "5e578d8f-7c8a-43ab-8a29-70848890aeb1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.536837 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.536870 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e578d8f-7c8a-43ab-8a29-70848890aeb1-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.790118 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.799231 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.854230 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.860938 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.861311 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="proxy-httpd" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861327 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="proxy-httpd" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.861344 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-notification-agent" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861350 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" 
containerName="ceilometer-notification-agent" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.861362 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-central-agent" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861368 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-central-agent" Dec 11 08:45:27 crc kubenswrapper[4992]: E1211 08:45:27.861387 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="sg-core" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861393 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="sg-core" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861559 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-central-agent" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861573 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="ceilometer-notification-agent" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861582 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="sg-core" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.861598 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" containerName="proxy-httpd" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.863234 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.876121 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.880088 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.897540 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949683 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949756 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9mc2\" (UniqueName: \"kubernetes.io/projected/bfc078c8-8507-4f89-a5dc-f91dd9877148-kube-api-access-v9mc2\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949804 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949838 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-scripts\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " 
pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949871 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949908 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-config-data\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:27 crc kubenswrapper[4992]: I1211 08:45:27.949957 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.051498 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.051604 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.052048 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-v9mc2\" (UniqueName: \"kubernetes.io/projected/bfc078c8-8507-4f89-a5dc-f91dd9877148-kube-api-access-v9mc2\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.052169 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.052227 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-scripts\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.052305 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.052395 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-config-data\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.052579 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-log-httpd\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: 
I1211 08:45:28.052954 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-run-httpd\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.058359 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.059911 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.062306 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-config-data\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.063046 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-scripts\") pod \"ceilometer-0\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.070087 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9mc2\" (UniqueName: \"kubernetes.io/projected/bfc078c8-8507-4f89-a5dc-f91dd9877148-kube-api-access-v9mc2\") pod \"ceilometer-0\" (UID: 
\"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.107240 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e578d8f-7c8a-43ab-8a29-70848890aeb1" path="/var/lib/kubelet/pods/5e578d8f-7c8a-43ab-8a29-70848890aeb1/volumes" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.109800 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd74de0b-d6c0-4892-befc-cd81b18a63ad" path="/var/lib/kubelet/pods/fd74de0b-d6c0-4892-befc-cd81b18a63ad/volumes" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.188760 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.377785 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8264506-1cab-488a-903d-43a6062db6ae","Type":"ContainerStarted","Data":"39b37355aa6f04df393ad35f51606d298b60431b74c24a405d23d5a45ef10fc2"} Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.685410 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:28 crc kubenswrapper[4992]: I1211 08:45:28.812202 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:29 crc kubenswrapper[4992]: I1211 08:45:29.015151 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:45:29 crc kubenswrapper[4992]: I1211 08:45:29.410778 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerStarted","Data":"701d6a77c11fb06668f6fd8a29d792a5885c4f5038e67b2266e49332cc3ff1f9"} Dec 11 08:45:29 crc kubenswrapper[4992]: I1211 08:45:29.414464 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b8264506-1cab-488a-903d-43a6062db6ae","Type":"ContainerStarted","Data":"06654bac11a73f25656a119cbdc101ebf671f35f4cf5c3116888f5b5c5433cb1"} Dec 11 08:45:30 crc kubenswrapper[4992]: I1211 08:45:30.429454 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerStarted","Data":"506b060d2917f370e4a1399c73862d6f6fcf36168d2718461db600e964dec046"} Dec 11 08:45:30 crc kubenswrapper[4992]: I1211 08:45:30.430124 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerStarted","Data":"5cb6c5ad22b5d24283bd0c1af06202bc9275eb1b645655a8d717fb9f9b58f498"} Dec 11 08:45:30 crc kubenswrapper[4992]: I1211 08:45:30.431626 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b8264506-1cab-488a-903d-43a6062db6ae","Type":"ContainerStarted","Data":"3d4d4ad1b1c48869e09b7f7f71ca6526b971fec7585635d0b291e65c1f85792f"} Dec 11 08:45:30 crc kubenswrapper[4992]: I1211 08:45:30.459241 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.459215399 podStartE2EDuration="4.459215399s" podCreationTimestamp="2025-12-11 08:45:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:30.450182017 +0000 UTC m=+1354.709655943" watchObservedRunningTime="2025-12-11 08:45:30.459215399 +0000 UTC m=+1354.718689325" Dec 11 08:45:31 crc kubenswrapper[4992]: I1211 08:45:31.442822 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerStarted","Data":"9cfa2a2c36391661fa52f0a85cadb1e6ca027122c5d03f0d9072c6d8181c281e"} Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.086496 
4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.460554 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerStarted","Data":"d512ba76b4c044986e3b2535ff3fd6c441b9465ed8a4d17cb434bbe48a3a4000"} Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.460763 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="sg-core" containerID="cri-o://9cfa2a2c36391661fa52f0a85cadb1e6ca027122c5d03f0d9072c6d8181c281e" gracePeriod=30 Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.460834 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="proxy-httpd" containerID="cri-o://d512ba76b4c044986e3b2535ff3fd6c441b9465ed8a4d17cb434bbe48a3a4000" gracePeriod=30 Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.460791 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-notification-agent" containerID="cri-o://506b060d2917f370e4a1399c73862d6f6fcf36168d2718461db600e964dec046" gracePeriod=30 Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.460988 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 08:45:33 crc kubenswrapper[4992]: I1211 08:45:33.460719 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-central-agent" containerID="cri-o://5cb6c5ad22b5d24283bd0c1af06202bc9275eb1b645655a8d717fb9f9b58f498" gracePeriod=30 Dec 11 08:45:33 crc 
kubenswrapper[4992]: I1211 08:45:33.487587 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.601687091 podStartE2EDuration="6.487562855s" podCreationTimestamp="2025-12-11 08:45:27 +0000 UTC" firstStartedPulling="2025-12-11 08:45:28.68685007 +0000 UTC m=+1352.946323996" lastFinishedPulling="2025-12-11 08:45:32.572725834 +0000 UTC m=+1356.832199760" observedRunningTime="2025-12-11 08:45:33.483988047 +0000 UTC m=+1357.743461973" watchObservedRunningTime="2025-12-11 08:45:33.487562855 +0000 UTC m=+1357.747036781" Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472268 4992 generic.go:334] "Generic (PLEG): container finished" podID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerID="d512ba76b4c044986e3b2535ff3fd6c441b9465ed8a4d17cb434bbe48a3a4000" exitCode=0 Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472520 4992 generic.go:334] "Generic (PLEG): container finished" podID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerID="9cfa2a2c36391661fa52f0a85cadb1e6ca027122c5d03f0d9072c6d8181c281e" exitCode=2 Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472534 4992 generic.go:334] "Generic (PLEG): container finished" podID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerID="506b060d2917f370e4a1399c73862d6f6fcf36168d2718461db600e964dec046" exitCode=0 Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472544 4992 generic.go:334] "Generic (PLEG): container finished" podID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerID="5cb6c5ad22b5d24283bd0c1af06202bc9275eb1b645655a8d717fb9f9b58f498" exitCode=0 Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472358 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerDied","Data":"d512ba76b4c044986e3b2535ff3fd6c441b9465ed8a4d17cb434bbe48a3a4000"} Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472578 4992 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerDied","Data":"9cfa2a2c36391661fa52f0a85cadb1e6ca027122c5d03f0d9072c6d8181c281e"} Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472620 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerDied","Data":"506b060d2917f370e4a1399c73862d6f6fcf36168d2718461db600e964dec046"} Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.472659 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerDied","Data":"5cb6c5ad22b5d24283bd0c1af06202bc9275eb1b645655a8d717fb9f9b58f498"} Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.624838 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7978c485bf-hpg7n" Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.683111 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66db4d95cb-74j4r"] Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.683362 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66db4d95cb-74j4r" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-api" containerID="cri-o://a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32" gracePeriod=30 Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.683486 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66db4d95cb-74j4r" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-httpd" containerID="cri-o://387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306" gracePeriod=30 Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.954442 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Dec 11 08:45:34 crc kubenswrapper[4992]: I1211 08:45:34.954495 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.003285 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.040223 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.325173 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.378177 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.378237 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.483125 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.483974 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfc078c8-8507-4f89-a5dc-f91dd9877148","Type":"ContainerDied","Data":"701d6a77c11fb06668f6fd8a29d792a5885c4f5038e67b2266e49332cc3ff1f9"} Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.484010 4992 scope.go:117] "RemoveContainer" containerID="d512ba76b4c044986e3b2535ff3fd6c441b9465ed8a4d17cb434bbe48a3a4000" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.486862 4992 generic.go:334] "Generic (PLEG): container finished" podID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerID="387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306" exitCode=0 Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.487931 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66db4d95cb-74j4r" event={"ID":"9d96d34a-cfab-41e2-b77e-679b9a0a8a23","Type":"ContainerDied","Data":"387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306"} Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.487971 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.488075 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.503103 4992 scope.go:117] "RemoveContainer" containerID="9cfa2a2c36391661fa52f0a85cadb1e6ca027122c5d03f0d9072c6d8181c281e" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510328 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-combined-ca-bundle\") pod \"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 
08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510405 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-scripts\") pod \"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510518 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-log-httpd\") pod \"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510548 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-sg-core-conf-yaml\") pod \"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510588 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9mc2\" (UniqueName: \"kubernetes.io/projected/bfc078c8-8507-4f89-a5dc-f91dd9877148-kube-api-access-v9mc2\") pod \"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510724 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-run-httpd\") pod \"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.510772 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-config-data\") pod 
\"bfc078c8-8507-4f89-a5dc-f91dd9877148\" (UID: \"bfc078c8-8507-4f89-a5dc-f91dd9877148\") " Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.511061 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.511289 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.511518 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.511542 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfc078c8-8507-4f89-a5dc-f91dd9877148-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.516909 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc078c8-8507-4f89-a5dc-f91dd9877148-kube-api-access-v9mc2" (OuterVolumeSpecName: "kube-api-access-v9mc2") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "kube-api-access-v9mc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.524018 4992 scope.go:117] "RemoveContainer" containerID="506b060d2917f370e4a1399c73862d6f6fcf36168d2718461db600e964dec046" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.535799 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-scripts" (OuterVolumeSpecName: "scripts") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.562787 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.612885 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-config-data" (OuterVolumeSpecName: "config-data") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.613673 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.613711 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.613724 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.613738 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9mc2\" (UniqueName: \"kubernetes.io/projected/bfc078c8-8507-4f89-a5dc-f91dd9877148-kube-api-access-v9mc2\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.631068 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfc078c8-8507-4f89-a5dc-f91dd9877148" (UID: "bfc078c8-8507-4f89-a5dc-f91dd9877148"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.667980 4992 scope.go:117] "RemoveContainer" containerID="5cb6c5ad22b5d24283bd0c1af06202bc9275eb1b645655a8d717fb9f9b58f498" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.715365 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc078c8-8507-4f89-a5dc-f91dd9877148-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.862818 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.880240 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.900677 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:35 crc kubenswrapper[4992]: E1211 08:45:35.901420 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="sg-core" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.901533 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="sg-core" Dec 11 08:45:35 crc kubenswrapper[4992]: E1211 08:45:35.901647 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="proxy-httpd" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.901750 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="proxy-httpd" Dec 11 08:45:35 crc kubenswrapper[4992]: E1211 08:45:35.901847 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-notification-agent" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.901917 
4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-notification-agent" Dec 11 08:45:35 crc kubenswrapper[4992]: E1211 08:45:35.902005 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-central-agent" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.902071 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-central-agent" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.902356 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-notification-agent" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.902454 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="proxy-httpd" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.902534 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="ceilometer-central-agent" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.902606 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" containerName="sg-core" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.904941 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.908860 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.909125 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 08:45:35 crc kubenswrapper[4992]: I1211 08:45:35.911846 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020253 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-run-httpd\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020317 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ghx8\" (UniqueName: \"kubernetes.io/projected/e34db3cf-2dbe-408f-b068-814f9ff69c97-kube-api-access-9ghx8\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020371 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-scripts\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020414 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-config-data\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " 
pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020488 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020542 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.020627 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-log-httpd\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.108264 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfc078c8-8507-4f89-a5dc-f91dd9877148" path="/var/lib/kubelet/pods/bfc078c8-8507-4f89-a5dc-f91dd9877148/volumes" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.122858 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-run-httpd\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.122933 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ghx8\" (UniqueName: 
\"kubernetes.io/projected/e34db3cf-2dbe-408f-b068-814f9ff69c97-kube-api-access-9ghx8\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.122975 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-scripts\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.123016 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-config-data\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.123062 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.123201 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.123237 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-log-httpd\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 
08:45:36.123563 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-run-httpd\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.123664 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-log-httpd\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.130272 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-scripts\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.130666 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.132561 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.138871 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-config-data\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " 
pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.150347 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ghx8\" (UniqueName: \"kubernetes.io/projected/e34db3cf-2dbe-408f-b068-814f9ff69c97-kube-api-access-9ghx8\") pod \"ceilometer-0\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.258456 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.499682 4992 generic.go:334] "Generic (PLEG): container finished" podID="8ead296a-d746-4d8b-a8c5-b51c08bf2422" containerID="161173e658d40330fe5968c18371acedb040a57ecb50a9236c73daca5cf866ab" exitCode=0 Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.499777 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n56r8" event={"ID":"8ead296a-d746-4d8b-a8c5-b51c08bf2422","Type":"ContainerDied","Data":"161173e658d40330fe5968c18371acedb040a57ecb50a9236c73daca5cf866ab"} Dec 11 08:45:36 crc kubenswrapper[4992]: W1211 08:45:36.710566 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode34db3cf_2dbe_408f_b068_814f9ff69c97.slice/crio-dc26bab30aaaf2ddab5a137b481d72af03a2c06d79a7a216cbbec4e577738092 WatchSource:0}: Error finding container dc26bab30aaaf2ddab5a137b481d72af03a2c06d79a7a216cbbec4e577738092: Status 404 returned error can't find the container with id dc26bab30aaaf2ddab5a137b481d72af03a2c06d79a7a216cbbec4e577738092 Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.714483 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.777447 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.777501 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.819070 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.819908 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:36 crc kubenswrapper[4992]: I1211 08:45:36.956322 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.510718 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.511112 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.510863 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerStarted","Data":"dc26bab30aaaf2ddab5a137b481d72af03a2c06d79a7a216cbbec4e577738092"} Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.511428 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.511460 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.566296 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.715045 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-external-api-0" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.895922 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.959422 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-config-data\") pod \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.959534 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-scripts\") pod \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.959662 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcqcg\" (UniqueName: \"kubernetes.io/projected/8ead296a-d746-4d8b-a8c5-b51c08bf2422-kube-api-access-rcqcg\") pod \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.959736 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-combined-ca-bundle\") pod \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\" (UID: \"8ead296a-d746-4d8b-a8c5-b51c08bf2422\") " Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.966253 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-scripts" (OuterVolumeSpecName: "scripts") pod "8ead296a-d746-4d8b-a8c5-b51c08bf2422" (UID: "8ead296a-d746-4d8b-a8c5-b51c08bf2422"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.966484 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ead296a-d746-4d8b-a8c5-b51c08bf2422-kube-api-access-rcqcg" (OuterVolumeSpecName: "kube-api-access-rcqcg") pod "8ead296a-d746-4d8b-a8c5-b51c08bf2422" (UID: "8ead296a-d746-4d8b-a8c5-b51c08bf2422"). InnerVolumeSpecName "kube-api-access-rcqcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.990708 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-config-data" (OuterVolumeSpecName: "config-data") pod "8ead296a-d746-4d8b-a8c5-b51c08bf2422" (UID: "8ead296a-d746-4d8b-a8c5-b51c08bf2422"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:37 crc kubenswrapper[4992]: I1211 08:45:37.992734 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ead296a-d746-4d8b-a8c5-b51c08bf2422" (UID: "8ead296a-d746-4d8b-a8c5-b51c08bf2422"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.062009 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.062235 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcqcg\" (UniqueName: \"kubernetes.io/projected/8ead296a-d746-4d8b-a8c5-b51c08bf2422-kube-api-access-rcqcg\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.062350 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.062437 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ead296a-d746-4d8b-a8c5-b51c08bf2422-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.518803 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n56r8" event={"ID":"8ead296a-d746-4d8b-a8c5-b51c08bf2422","Type":"ContainerDied","Data":"96e14128f30d2e6d3606d225ccc37dd0bb3193ed54aec8f741f49675e844f059"} Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.519785 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e14128f30d2e6d3606d225ccc37dd0bb3193ed54aec8f741f49675e844f059" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.519913 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n56r8" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.524591 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerStarted","Data":"e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb"} Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.524666 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerStarted","Data":"c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff"} Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.635611 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 08:45:38 crc kubenswrapper[4992]: E1211 08:45:38.636778 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ead296a-d746-4d8b-a8c5-b51c08bf2422" containerName="nova-cell0-conductor-db-sync" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.636861 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ead296a-d746-4d8b-a8c5-b51c08bf2422" containerName="nova-cell0-conductor-db-sync" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.637173 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ead296a-d746-4d8b-a8c5-b51c08bf2422" containerName="nova-cell0-conductor-db-sync" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.637858 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.643665 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-xzp6c" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.644018 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.652789 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.774469 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.774574 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhnc7\" (UniqueName: \"kubernetes.io/projected/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-kube-api-access-mhnc7\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.774665 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.875921 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhnc7\" (UniqueName: 
\"kubernetes.io/projected/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-kube-api-access-mhnc7\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.876056 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.876121 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.881390 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.881443 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.897106 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhnc7\" (UniqueName: \"kubernetes.io/projected/dcb13d3b-6f5a-432f-a32f-80fbf81c6adf-kube-api-access-mhnc7\") pod \"nova-cell0-conductor-0\" (UID: 
\"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf\") " pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:38 crc kubenswrapper[4992]: I1211 08:45:38.969563 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.332893 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.406921 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-config\") pod \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.407001 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-httpd-config\") pod \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.407084 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-combined-ca-bundle\") pod \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.407114 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-ovndb-tls-certs\") pod \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.407195 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-69zpd\" (UniqueName: \"kubernetes.io/projected/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-kube-api-access-69zpd\") pod \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\" (UID: \"9d96d34a-cfab-41e2-b77e-679b9a0a8a23\") " Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.411001 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9d96d34a-cfab-41e2-b77e-679b9a0a8a23" (UID: "9d96d34a-cfab-41e2-b77e-679b9a0a8a23"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.414313 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-kube-api-access-69zpd" (OuterVolumeSpecName: "kube-api-access-69zpd") pod "9d96d34a-cfab-41e2-b77e-679b9a0a8a23" (UID: "9d96d34a-cfab-41e2-b77e-679b9a0a8a23"). InnerVolumeSpecName "kube-api-access-69zpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.463782 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d96d34a-cfab-41e2-b77e-679b9a0a8a23" (UID: "9d96d34a-cfab-41e2-b77e-679b9a0a8a23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.467341 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-config" (OuterVolumeSpecName: "config") pod "9d96d34a-cfab-41e2-b77e-679b9a0a8a23" (UID: "9d96d34a-cfab-41e2-b77e-679b9a0a8a23"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.495953 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9d96d34a-cfab-41e2-b77e-679b9a0a8a23" (UID: "9d96d34a-cfab-41e2-b77e-679b9a0a8a23"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.512392 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.512530 4992 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.512547 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.512561 4992 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.512573 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69zpd\" (UniqueName: \"kubernetes.io/projected/9d96d34a-cfab-41e2-b77e-679b9a0a8a23-kube-api-access-69zpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.524593 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 08:45:39 
crc kubenswrapper[4992]: I1211 08:45:39.540701 4992 generic.go:334] "Generic (PLEG): container finished" podID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerID="a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32" exitCode=0 Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.540758 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66db4d95cb-74j4r" event={"ID":"9d96d34a-cfab-41e2-b77e-679b9a0a8a23","Type":"ContainerDied","Data":"a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32"} Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.540787 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66db4d95cb-74j4r" event={"ID":"9d96d34a-cfab-41e2-b77e-679b9a0a8a23","Type":"ContainerDied","Data":"c04db233213d565380b05aa93120b93a50f3ba6717150c07137fc480e0994e91"} Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.540802 4992 scope.go:117] "RemoveContainer" containerID="387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.540923 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66db4d95cb-74j4r" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.554156 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerStarted","Data":"ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca"} Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.580076 4992 scope.go:117] "RemoveContainer" containerID="a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.583234 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66db4d95cb-74j4r"] Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.594977 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66db4d95cb-74j4r"] Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.616893 4992 scope.go:117] "RemoveContainer" containerID="387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306" Dec 11 08:45:39 crc kubenswrapper[4992]: E1211 08:45:39.617719 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306\": container with ID starting with 387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306 not found: ID does not exist" containerID="387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.617770 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306"} err="failed to get container status \"387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306\": rpc error: code = NotFound desc = could not find container \"387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306\": container with ID 
starting with 387dc55afa7c361a686a5d368972769b83fcf514183acef3a05e9116a7d4a306 not found: ID does not exist" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.617804 4992 scope.go:117] "RemoveContainer" containerID="a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32" Dec 11 08:45:39 crc kubenswrapper[4992]: E1211 08:45:39.618141 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32\": container with ID starting with a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32 not found: ID does not exist" containerID="a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.618176 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32"} err="failed to get container status \"a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32\": rpc error: code = NotFound desc = could not find container \"a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32\": container with ID starting with a4ae8d9b3950a03091547c15b3504e712fcc50ec132144ddd8f513f74ba36a32 not found: ID does not exist" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.640219 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.640333 4992 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 08:45:39 crc kubenswrapper[4992]: I1211 08:45:39.683954 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 08:45:40 crc kubenswrapper[4992]: I1211 08:45:40.107580 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" path="/var/lib/kubelet/pods/9d96d34a-cfab-41e2-b77e-679b9a0a8a23/volumes" Dec 11 08:45:40 crc kubenswrapper[4992]: I1211 08:45:40.563849 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf","Type":"ContainerStarted","Data":"d0f6e56170c4427a897de286254daa1152592707845b3f52f132eb132f8b9d31"} Dec 11 08:45:40 crc kubenswrapper[4992]: I1211 08:45:40.564253 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dcb13d3b-6f5a-432f-a32f-80fbf81c6adf","Type":"ContainerStarted","Data":"615446d29da1c67ca385e6a2b62b44fbe139beee47c58ac86528ff13fcc09731"} Dec 11 08:45:40 crc kubenswrapper[4992]: I1211 08:45:40.580863 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.580846615 podStartE2EDuration="2.580846615s" podCreationTimestamp="2025-12-11 08:45:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:40.577808271 +0000 UTC m=+1364.837282217" watchObservedRunningTime="2025-12-11 08:45:40.580846615 +0000 UTC m=+1364.840320531" Dec 11 08:45:41 crc kubenswrapper[4992]: I1211 08:45:41.631902 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerStarted","Data":"2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f"} Dec 11 08:45:41 crc kubenswrapper[4992]: I1211 08:45:41.632117 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-central-agent" containerID="cri-o://c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff" gracePeriod=30 Dec 11 08:45:41 crc kubenswrapper[4992]: 
I1211 08:45:41.632144 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="proxy-httpd" containerID="cri-o://2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f" gracePeriod=30 Dec 11 08:45:41 crc kubenswrapper[4992]: I1211 08:45:41.632160 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="sg-core" containerID="cri-o://ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca" gracePeriod=30 Dec 11 08:45:41 crc kubenswrapper[4992]: I1211 08:45:41.632174 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-notification-agent" containerID="cri-o://e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb" gracePeriod=30 Dec 11 08:45:41 crc kubenswrapper[4992]: I1211 08:45:41.632511 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 08:45:41 crc kubenswrapper[4992]: I1211 08:45:41.662366 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.439373625 podStartE2EDuration="6.662342055s" podCreationTimestamp="2025-12-11 08:45:35 +0000 UTC" firstStartedPulling="2025-12-11 08:45:36.714063333 +0000 UTC m=+1360.973537259" lastFinishedPulling="2025-12-11 08:45:40.937031763 +0000 UTC m=+1365.196505689" observedRunningTime="2025-12-11 08:45:41.654551246 +0000 UTC m=+1365.914025212" watchObservedRunningTime="2025-12-11 08:45:41.662342055 +0000 UTC m=+1365.921815991" Dec 11 08:45:42 crc kubenswrapper[4992]: I1211 08:45:42.657374 4992 generic.go:334] "Generic (PLEG): container finished" podID="e34db3cf-2dbe-408f-b068-814f9ff69c97" 
containerID="2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f" exitCode=0 Dec 11 08:45:42 crc kubenswrapper[4992]: I1211 08:45:42.658158 4992 generic.go:334] "Generic (PLEG): container finished" podID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerID="ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca" exitCode=2 Dec 11 08:45:42 crc kubenswrapper[4992]: I1211 08:45:42.658248 4992 generic.go:334] "Generic (PLEG): container finished" podID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerID="e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb" exitCode=0 Dec 11 08:45:42 crc kubenswrapper[4992]: I1211 08:45:42.657435 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerDied","Data":"2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f"} Dec 11 08:45:42 crc kubenswrapper[4992]: I1211 08:45:42.658337 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerDied","Data":"ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca"} Dec 11 08:45:42 crc kubenswrapper[4992]: I1211 08:45:42.658349 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerDied","Data":"e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb"} Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.183400 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.368845 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-sg-core-conf-yaml\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.368950 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-scripts\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369032 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-log-httpd\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369083 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-run-httpd\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369170 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-combined-ca-bundle\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369235 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ghx8\" (UniqueName: 
\"kubernetes.io/projected/e34db3cf-2dbe-408f-b068-814f9ff69c97-kube-api-access-9ghx8\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369255 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-config-data\") pod \"e34db3cf-2dbe-408f-b068-814f9ff69c97\" (UID: \"e34db3cf-2dbe-408f-b068-814f9ff69c97\") " Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369668 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.369793 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.370422 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.370441 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e34db3cf-2dbe-408f-b068-814f9ff69c97-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.374600 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34db3cf-2dbe-408f-b068-814f9ff69c97-kube-api-access-9ghx8" (OuterVolumeSpecName: "kube-api-access-9ghx8") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "kube-api-access-9ghx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.376148 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-scripts" (OuterVolumeSpecName: "scripts") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.416849 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.434430 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.472110 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.472143 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.472153 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.472162 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ghx8\" (UniqueName: \"kubernetes.io/projected/e34db3cf-2dbe-408f-b068-814f9ff69c97-kube-api-access-9ghx8\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.477031 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-config-data" (OuterVolumeSpecName: "config-data") pod "e34db3cf-2dbe-408f-b068-814f9ff69c97" (UID: "e34db3cf-2dbe-408f-b068-814f9ff69c97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.573744 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34db3cf-2dbe-408f-b068-814f9ff69c97-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.726124 4992 generic.go:334] "Generic (PLEG): container finished" podID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerID="c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff" exitCode=0
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.726171 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.726166 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerDied","Data":"c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff"}
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.726772 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e34db3cf-2dbe-408f-b068-814f9ff69c97","Type":"ContainerDied","Data":"dc26bab30aaaf2ddab5a137b481d72af03a2c06d79a7a216cbbec4e577738092"}
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.726802 4992 scope.go:117] "RemoveContainer" containerID="2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.746423 4992 scope.go:117] "RemoveContainer" containerID="ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.766987 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.781899 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.794834 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:45:48 crc kubenswrapper[4992]: E1211 08:45:48.795193 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-central-agent"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795209 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-central-agent"
Dec 11 08:45:48 crc kubenswrapper[4992]: E1211 08:45:48.795223 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-httpd"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795229 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-httpd"
Dec 11 08:45:48 crc kubenswrapper[4992]: E1211 08:45:48.795239 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="proxy-httpd"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795245 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="proxy-httpd"
Dec 11 08:45:48 crc kubenswrapper[4992]: E1211 08:45:48.795267 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-api"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795273 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-api"
Dec 11 08:45:48 crc kubenswrapper[4992]: E1211 08:45:48.795295 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="sg-core"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795300 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="sg-core"
Dec 11 08:45:48 crc kubenswrapper[4992]: E1211 08:45:48.795311 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-notification-agent"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795316 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-notification-agent"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795465 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="proxy-httpd"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795485 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="sg-core"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795493 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-api"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795503 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-central-agent"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795516 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d96d34a-cfab-41e2-b77e-679b9a0a8a23" containerName="neutron-httpd"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.795526 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" containerName="ceilometer-notification-agent"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.797057 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.814280 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.815154 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.815325 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994075 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994146 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7b8l\" (UniqueName: \"kubernetes.io/projected/33e1f655-4ae5-4bc7-8829-501d73b23615-kube-api-access-t7b8l\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994205 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-log-httpd\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994249 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-config-data\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994293 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-run-httpd\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994349 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-scripts\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:48 crc kubenswrapper[4992]: I1211 08:45:48.994475 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.006824 4992 scope.go:117] "RemoveContainer" containerID="e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.012821 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.030858 4992 scope.go:117] "RemoveContainer" containerID="c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.064476 4992 scope.go:117] "RemoveContainer" containerID="2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f"
Dec 11 08:45:49 crc kubenswrapper[4992]: E1211 08:45:49.065002 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f\": container with ID starting with 2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f not found: ID does not exist" containerID="2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.065035 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f"} err="failed to get container status \"2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f\": rpc error: code = NotFound desc = could not find container \"2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f\": container with ID starting with 2c725cc13f0bf0dd3ad20a6a2964780522d68debc63d746014dd5d5f3dd1f43f not found: ID does not exist"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.065055 4992 scope.go:117] "RemoveContainer" containerID="ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca"
Dec 11 08:45:49 crc kubenswrapper[4992]: E1211 08:45:49.065371 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca\": container with ID starting with ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca not found: ID does not exist" containerID="ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.065418 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca"} err="failed to get container status \"ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca\": rpc error: code = NotFound desc = could not find container \"ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca\": container with ID starting with ce0f9e75f4706540bf5f4f1f3b1e74de5ded8a2d23106edde070708cb6b07aca not found: ID does not exist"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.065453 4992 scope.go:117] "RemoveContainer" containerID="e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb"
Dec 11 08:45:49 crc kubenswrapper[4992]: E1211 08:45:49.065797 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb\": container with ID starting with e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb not found: ID does not exist" containerID="e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.065827 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb"} err="failed to get container status \"e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb\": rpc error: code = NotFound desc = could not find container \"e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb\": container with ID starting with e3c9e323049b6b75f610690e304f672a7a5b4ac70fbeeca1a63dc1a9da1774fb not found: ID does not exist"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.065845 4992 scope.go:117] "RemoveContainer" containerID="c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff"
Dec 11 08:45:49 crc kubenswrapper[4992]: E1211 08:45:49.066060 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff\": container with ID starting with c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff not found: ID does not exist" containerID="c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.066090 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff"} err="failed to get container status \"c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff\": rpc error: code = NotFound desc = could not find container \"c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff\": container with ID starting with c05a608dd927ec0a46cb995cbc8a6f654c540de6d3e9aafdbf1abc8a211ec4ff not found: ID does not exist"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.095842 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-run-httpd\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.095908 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-scripts\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.095969 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.096046 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.096072 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7b8l\" (UniqueName: \"kubernetes.io/projected/33e1f655-4ae5-4bc7-8829-501d73b23615-kube-api-access-t7b8l\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.096116 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-log-httpd\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.096151 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-config-data\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.096897 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-log-httpd\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.097390 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-run-httpd\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.101527 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.102307 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-scripts\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.102396 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-config-data\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.108081 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.121625 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7b8l\" (UniqueName: \"kubernetes.io/projected/33e1f655-4ae5-4bc7-8829-501d73b23615-kube-api-access-t7b8l\") pod \"ceilometer-0\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.310687 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.484130 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dl5hv"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.488268 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.499960 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.500179 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.503314 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-config-data\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.503374 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.503401 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4srk\" (UniqueName: \"kubernetes.io/projected/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-kube-api-access-b4srk\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.503430 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-scripts\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.505236 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dl5hv"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.605139 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-config-data\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.618064 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.618114 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4srk\" (UniqueName: \"kubernetes.io/projected/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-kube-api-access-b4srk\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.618178 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-scripts\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.630275 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-scripts\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.630388 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.631697 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-config-data\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.659606 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4srk\" (UniqueName: \"kubernetes.io/projected/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-kube-api-access-b4srk\") pod \"nova-cell0-cell-mapping-dl5hv\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.677374 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.678872 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.681832 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.703250 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.731741 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.733813 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp9pj\" (UniqueName: \"kubernetes.io/projected/03693c6c-5abe-4912-a367-85f9913399db-kube-api-access-bp9pj\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.733870 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-config-data\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.733981 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.734011 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03693c6c-5abe-4912-a367-85f9913399db-logs\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.734024 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.741043 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.826208 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836044 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp9pj\" (UniqueName: \"kubernetes.io/projected/03693c6c-5abe-4912-a367-85f9913399db-kube-api-access-bp9pj\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836097 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-config-data\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836123 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42k47\" (UniqueName: \"kubernetes.io/projected/83ec9f41-2ca5-4d1e-b876-f986326499f8-kube-api-access-42k47\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836147 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836245 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836279 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836302 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03693c6c-5abe-4912-a367-85f9913399db-logs\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.836735 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03693c6c-5abe-4912-a367-85f9913399db-logs\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.848323 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.851989 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-config-data\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.858163 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dl5hv"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.870433 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp9pj\" (UniqueName: \"kubernetes.io/projected/03693c6c-5abe-4912-a367-85f9913399db-kube-api-access-bp9pj\") pod \"nova-metadata-0\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " pod="openstack/nova-metadata-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.873828 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.875156 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.878958 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.936268 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.939449 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42k47\" (UniqueName: \"kubernetes.io/projected/83ec9f41-2ca5-4d1e-b876-f986326499f8-kube-api-access-42k47\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.939498 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.939595 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rntl6\" (UniqueName: \"kubernetes.io/projected/2fb63d41-453e-4a48-94f6-5db048e7975f-kube-api-access-rntl6\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.939683 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.939726 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.939752 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.948079 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.949777 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:49 crc kubenswrapper[4992]: I1211 08:45:49.968276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42k47\" (UniqueName: \"kubernetes.io/projected/83ec9f41-2ca5-4d1e-b876-f986326499f8-kube-api-access-42k47\") pod \"nova-cell1-novncproxy-0\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.003568 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-5bg7x"]
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.009457 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.016984 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.022778 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-5bg7x"]
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.037980 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.042178 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.044553 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.044889 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7hd6\" (UniqueName: \"kubernetes.io/projected/1762eaac-ace3-46ce-996f-619ab0c4bdae-kube-api-access-c7hd6\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.045065 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.045191 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-config\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.045339 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.045465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.045743 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.045894 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:45:50 crc
kubenswrapper[4992]: I1211 08:45:50.046011 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rntl6\" (UniqueName: \"kubernetes.io/projected/2fb63d41-453e-4a48-94f6-5db048e7975f-kube-api-access-rntl6\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.048220 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.058356 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.066305 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-config-data\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.067093 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.073014 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rntl6\" (UniqueName: \"kubernetes.io/projected/2fb63d41-453e-4a48-94f6-5db048e7975f-kube-api-access-rntl6\") pod \"nova-scheduler-0\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " pod="openstack/nova-scheduler-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.075549 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.115976 4992 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="e34db3cf-2dbe-408f-b068-814f9ff69c97" path="/var/lib/kubelet/pods/e34db3cf-2dbe-408f-b068-814f9ff69c97/volumes" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155247 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/84f93850-ff06-42e9-b1e3-07f0800312d9-kube-api-access-4k87c\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155439 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155550 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155671 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155739 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7hd6\" (UniqueName: \"kubernetes.io/projected/1762eaac-ace3-46ce-996f-619ab0c4bdae-kube-api-access-c7hd6\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" 
(UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155779 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-config-data\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155810 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-config\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155894 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155930 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84f93850-ff06-42e9-b1e3-07f0800312d9-logs\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.155972 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: 
I1211 08:45:50.157025 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.158466 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.158622 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.160598 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.162219 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-config\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.178122 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c7hd6\" (UniqueName: \"kubernetes.io/projected/1762eaac-ace3-46ce-996f-619ab0c4bdae-kube-api-access-c7hd6\") pod \"dnsmasq-dns-845d6d6f59-5bg7x\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") " pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.179880 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.258147 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.258240 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-config-data\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.258312 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84f93850-ff06-42e9-b1e3-07f0800312d9-logs\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.258362 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/84f93850-ff06-42e9-b1e3-07f0800312d9-kube-api-access-4k87c\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.263618 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-config-data\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.264366 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84f93850-ff06-42e9-b1e3-07f0800312d9-logs\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.264541 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.286604 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/84f93850-ff06-42e9-b1e3-07f0800312d9-kube-api-access-4k87c\") pod \"nova-api-0\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.327394 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.346282 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.467129 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.645767 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.718182 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.744385 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9mjph"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.745854 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.751939 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.751987 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.759443 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9mjph"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.776466 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-config-data\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.776592 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nq6\" (UniqueName: \"kubernetes.io/projected/765acb41-3b1b-4ff0-a57e-9334876b8750-kube-api-access-p2nq6\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: 
\"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.776669 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.776689 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-scripts\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.785029 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerStarted","Data":"cac6fb8a721e75856c70d58ff4907acdc81bbafa2186a3d520bd2f8e995e30a7"} Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.786984 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ec9f41-2ca5-4d1e-b876-f986326499f8","Type":"ContainerStarted","Data":"80004c6bd027ecef2a01d95b76c09a65a391c7d6399ab6ca6a55f6435c81a098"} Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.788323 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03693c6c-5abe-4912-a367-85f9913399db","Type":"ContainerStarted","Data":"1d914af6f969eed0247c103de6c733028cf137a2743310c9123814ae049ada0c"} Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.878557 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-config-data\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.878687 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nq6\" (UniqueName: \"kubernetes.io/projected/765acb41-3b1b-4ff0-a57e-9334876b8750-kube-api-access-p2nq6\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.878742 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-scripts\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.878760 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.885463 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-scripts\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.888655 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.889414 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-config-data\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.900022 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nq6\" (UniqueName: \"kubernetes.io/projected/765acb41-3b1b-4ff0-a57e-9334876b8750-kube-api-access-p2nq6\") pod \"nova-cell1-conductor-db-sync-9mjph\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.908464 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-5bg7x"] Dec 11 08:45:50 crc kubenswrapper[4992]: I1211 08:45:50.955579 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dl5hv"] Dec 11 08:45:50 crc kubenswrapper[4992]: W1211 08:45:50.956880 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5d5ca90_15ec_4992_b942_c8d63cd82ea6.slice/crio-56cd1791b84d69714466c89a6dac706efce1cd5a88b92bb616be72386698cb24 WatchSource:0}: Error finding container 56cd1791b84d69714466c89a6dac706efce1cd5a88b92bb616be72386698cb24: Status 404 returned error can't find the container with id 56cd1791b84d69714466c89a6dac706efce1cd5a88b92bb616be72386698cb24 Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.028538 4992 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.069286 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:45:51 crc kubenswrapper[4992]: W1211 08:45:51.072365 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f93850_ff06_42e9_b1e3_07f0800312d9.slice/crio-32efa4cedc09b4b4357c53e029b3299586ff7b21b657043b9a5ffd0e8c293177 WatchSource:0}: Error finding container 32efa4cedc09b4b4357c53e029b3299586ff7b21b657043b9a5ffd0e8c293177: Status 404 returned error can't find the container with id 32efa4cedc09b4b4357c53e029b3299586ff7b21b657043b9a5ffd0e8c293177 Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.086395 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.609701 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9mjph"] Dec 11 08:45:51 crc kubenswrapper[4992]: W1211 08:45:51.613896 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod765acb41_3b1b_4ff0_a57e_9334876b8750.slice/crio-cd64077d94392d6fc2f6496f50a6628d27be371930b0bc6ab5b7bcf1d1e1ff64 WatchSource:0}: Error finding container cd64077d94392d6fc2f6496f50a6628d27be371930b0bc6ab5b7bcf1d1e1ff64: Status 404 returned error can't find the container with id cd64077d94392d6fc2f6496f50a6628d27be371930b0bc6ab5b7bcf1d1e1ff64 Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.816711 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9mjph" event={"ID":"765acb41-3b1b-4ff0-a57e-9334876b8750","Type":"ContainerStarted","Data":"27ca317d24e17faf4d99bfb2df1f2645a6a5e4a67c43cd8450cc3b61f210c631"} Dec 11 08:45:51 crc kubenswrapper[4992]: 
I1211 08:45:51.817065 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9mjph" event={"ID":"765acb41-3b1b-4ff0-a57e-9334876b8750","Type":"ContainerStarted","Data":"cd64077d94392d6fc2f6496f50a6628d27be371930b0bc6ab5b7bcf1d1e1ff64"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.818078 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dl5hv" event={"ID":"e5d5ca90-15ec-4992-b942-c8d63cd82ea6","Type":"ContainerStarted","Data":"1b14cbf457af962f9c8b2c30b2b84e577e1b6977f4b88290314fd9d29c4638aa"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.818131 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dl5hv" event={"ID":"e5d5ca90-15ec-4992-b942-c8d63cd82ea6","Type":"ContainerStarted","Data":"56cd1791b84d69714466c89a6dac706efce1cd5a88b92bb616be72386698cb24"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.819287 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84f93850-ff06-42e9-b1e3-07f0800312d9","Type":"ContainerStarted","Data":"32efa4cedc09b4b4357c53e029b3299586ff7b21b657043b9a5ffd0e8c293177"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.820709 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb63d41-453e-4a48-94f6-5db048e7975f","Type":"ContainerStarted","Data":"f1c58bfd36645732144951eee8834ad99258ea93a113dddabda8e8f40b030dca"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.822548 4992 generic.go:334] "Generic (PLEG): container finished" podID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerID="1d80ba546d99677c414f332750877d6c6d368e8737e0bb6278b3c9fbe33fe4e1" exitCode=0 Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.822647 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" 
event={"ID":"1762eaac-ace3-46ce-996f-619ab0c4bdae","Type":"ContainerDied","Data":"1d80ba546d99677c414f332750877d6c6d368e8737e0bb6278b3c9fbe33fe4e1"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.822670 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" event={"ID":"1762eaac-ace3-46ce-996f-619ab0c4bdae","Type":"ContainerStarted","Data":"32e977ab30f09e9c374ad129b01169c42803e53398214ff620a2c1bbc5331388"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.836125 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerStarted","Data":"3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b"} Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.847831 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9mjph" podStartSLOduration=1.847811792 podStartE2EDuration="1.847811792s" podCreationTimestamp="2025-12-11 08:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:51.835458672 +0000 UTC m=+1376.094932598" watchObservedRunningTime="2025-12-11 08:45:51.847811792 +0000 UTC m=+1376.107285718" Dec 11 08:45:51 crc kubenswrapper[4992]: I1211 08:45:51.891007 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dl5hv" podStartSLOduration=2.890983661 podStartE2EDuration="2.890983661s" podCreationTimestamp="2025-12-11 08:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:51.878941138 +0000 UTC m=+1376.138415104" watchObservedRunningTime="2025-12-11 08:45:51.890983661 +0000 UTC m=+1376.150457597" Dec 11 08:45:52 crc kubenswrapper[4992]: I1211 08:45:52.848768 4992 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" event={"ID":"1762eaac-ace3-46ce-996f-619ab0c4bdae","Type":"ContainerStarted","Data":"8edb39bda4d87d3a2b6872bd08e9fbe8708057efe489ec90401b1daba8f4cadf"} Dec 11 08:45:52 crc kubenswrapper[4992]: I1211 08:45:52.878364 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" podStartSLOduration=3.8783473649999998 podStartE2EDuration="3.878347365s" podCreationTimestamp="2025-12-11 08:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:45:52.875355383 +0000 UTC m=+1377.134829319" watchObservedRunningTime="2025-12-11 08:45:52.878347365 +0000 UTC m=+1377.137821291" Dec 11 08:45:53 crc kubenswrapper[4992]: I1211 08:45:53.339023 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:45:53 crc kubenswrapper[4992]: I1211 08:45:53.416988 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:53 crc kubenswrapper[4992]: I1211 08:45:53.857875 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:45:54 crc kubenswrapper[4992]: I1211 08:45:54.904999 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03693c6c-5abe-4912-a367-85f9913399db","Type":"ContainerStarted","Data":"fb5cc07d3544674972fd36edcf1e0b877dd67b53335732909f5de3173a102cc6"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.915789 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03693c6c-5abe-4912-a367-85f9913399db","Type":"ContainerStarted","Data":"06e41b92bab7ec56c330be8407c64e6b429869492882633361da98960df722ec"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.916065 4992 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-log" containerID="cri-o://fb5cc07d3544674972fd36edcf1e0b877dd67b53335732909f5de3173a102cc6" gracePeriod=30 Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.916394 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-metadata" containerID="cri-o://06e41b92bab7ec56c330be8407c64e6b429869492882633361da98960df722ec" gracePeriod=30 Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.934591 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84f93850-ff06-42e9-b1e3-07f0800312d9","Type":"ContainerStarted","Data":"c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.934672 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84f93850-ff06-42e9-b1e3-07f0800312d9","Type":"ContainerStarted","Data":"3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.936542 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.12286136 podStartE2EDuration="6.936513012s" podCreationTimestamp="2025-12-11 08:45:49 +0000 UTC" firstStartedPulling="2025-12-11 08:45:50.659765784 +0000 UTC m=+1374.919239710" lastFinishedPulling="2025-12-11 08:45:54.473417436 +0000 UTC m=+1378.732891362" observedRunningTime="2025-12-11 08:45:55.93189695 +0000 UTC m=+1380.191370886" watchObservedRunningTime="2025-12-11 08:45:55.936513012 +0000 UTC m=+1380.195986978" Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.953416 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"2fb63d41-453e-4a48-94f6-5db048e7975f","Type":"ContainerStarted","Data":"643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.986416 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.552550984 podStartE2EDuration="6.986397463s" podCreationTimestamp="2025-12-11 08:45:49 +0000 UTC" firstStartedPulling="2025-12-11 08:45:51.076939164 +0000 UTC m=+1375.336413090" lastFinishedPulling="2025-12-11 08:45:54.510785643 +0000 UTC m=+1378.770259569" observedRunningTime="2025-12-11 08:45:55.966071249 +0000 UTC m=+1380.225545185" watchObservedRunningTime="2025-12-11 08:45:55.986397463 +0000 UTC m=+1380.245871389" Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.986649 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerStarted","Data":"d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.986691 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerStarted","Data":"b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.988414 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ec9f41-2ca5-4d1e-b876-f986326499f8","Type":"ContainerStarted","Data":"be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038"} Dec 11 08:45:55 crc kubenswrapper[4992]: I1211 08:45:55.988870 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="83ec9f41-2ca5-4d1e-b876-f986326499f8" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038" gracePeriod=30 Dec 11 08:45:56 crc kubenswrapper[4992]: I1211 08:45:56.011092 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.515921284 podStartE2EDuration="7.011072332s" podCreationTimestamp="2025-12-11 08:45:49 +0000 UTC" firstStartedPulling="2025-12-11 08:45:51.039670359 +0000 UTC m=+1375.299144295" lastFinishedPulling="2025-12-11 08:45:54.534821417 +0000 UTC m=+1378.794295343" observedRunningTime="2025-12-11 08:45:55.988839772 +0000 UTC m=+1380.248313738" watchObservedRunningTime="2025-12-11 08:45:56.011072332 +0000 UTC m=+1380.270546258" Dec 11 08:45:56 crc kubenswrapper[4992]: I1211 08:45:56.018597 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.287901998 podStartE2EDuration="7.018577545s" podCreationTimestamp="2025-12-11 08:45:49 +0000 UTC" firstStartedPulling="2025-12-11 08:45:50.743332944 +0000 UTC m=+1375.002806870" lastFinishedPulling="2025-12-11 08:45:54.474008491 +0000 UTC m=+1378.733482417" observedRunningTime="2025-12-11 08:45:56.003112429 +0000 UTC m=+1380.262586355" watchObservedRunningTime="2025-12-11 08:45:56.018577545 +0000 UTC m=+1380.278051471" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.019283 4992 generic.go:334] "Generic (PLEG): container finished" podID="03693c6c-5abe-4912-a367-85f9913399db" containerID="06e41b92bab7ec56c330be8407c64e6b429869492882633361da98960df722ec" exitCode=0 Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.019885 4992 generic.go:334] "Generic (PLEG): container finished" podID="03693c6c-5abe-4912-a367-85f9913399db" containerID="fb5cc07d3544674972fd36edcf1e0b877dd67b53335732909f5de3173a102cc6" exitCode=143 Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.019381 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"03693c6c-5abe-4912-a367-85f9913399db","Type":"ContainerDied","Data":"06e41b92bab7ec56c330be8407c64e6b429869492882633361da98960df722ec"} Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.020652 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03693c6c-5abe-4912-a367-85f9913399db","Type":"ContainerDied","Data":"fb5cc07d3544674972fd36edcf1e0b877dd67b53335732909f5de3173a102cc6"} Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.020674 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"03693c6c-5abe-4912-a367-85f9913399db","Type":"ContainerDied","Data":"1d914af6f969eed0247c103de6c733028cf137a2743310c9123814ae049ada0c"} Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.020685 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d914af6f969eed0247c103de6c733028cf137a2743310c9123814ae049ada0c" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.068546 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.224729 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-config-data\") pod \"03693c6c-5abe-4912-a367-85f9913399db\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.224862 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-combined-ca-bundle\") pod \"03693c6c-5abe-4912-a367-85f9913399db\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.224955 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03693c6c-5abe-4912-a367-85f9913399db-logs\") pod \"03693c6c-5abe-4912-a367-85f9913399db\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.225097 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp9pj\" (UniqueName: \"kubernetes.io/projected/03693c6c-5abe-4912-a367-85f9913399db-kube-api-access-bp9pj\") pod \"03693c6c-5abe-4912-a367-85f9913399db\" (UID: \"03693c6c-5abe-4912-a367-85f9913399db\") " Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.226975 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03693c6c-5abe-4912-a367-85f9913399db-logs" (OuterVolumeSpecName: "logs") pod "03693c6c-5abe-4912-a367-85f9913399db" (UID: "03693c6c-5abe-4912-a367-85f9913399db"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.229768 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03693c6c-5abe-4912-a367-85f9913399db-kube-api-access-bp9pj" (OuterVolumeSpecName: "kube-api-access-bp9pj") pod "03693c6c-5abe-4912-a367-85f9913399db" (UID: "03693c6c-5abe-4912-a367-85f9913399db"). InnerVolumeSpecName "kube-api-access-bp9pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.252832 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03693c6c-5abe-4912-a367-85f9913399db" (UID: "03693c6c-5abe-4912-a367-85f9913399db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.259834 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-config-data" (OuterVolumeSpecName: "config-data") pod "03693c6c-5abe-4912-a367-85f9913399db" (UID: "03693c6c-5abe-4912-a367-85f9913399db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.328911 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp9pj\" (UniqueName: \"kubernetes.io/projected/03693c6c-5abe-4912-a367-85f9913399db-kube-api-access-bp9pj\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.328956 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.328971 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03693c6c-5abe-4912-a367-85f9913399db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:57 crc kubenswrapper[4992]: I1211 08:45:57.328985 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03693c6c-5abe-4912-a367-85f9913399db-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.034341 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.034353 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerStarted","Data":"be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1"} Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.034894 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.066673 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.860287734 podStartE2EDuration="10.066627194s" podCreationTimestamp="2025-12-11 08:45:48 +0000 UTC" firstStartedPulling="2025-12-11 08:45:49.895573429 +0000 UTC m=+1374.155047355" lastFinishedPulling="2025-12-11 08:45:57.101912889 +0000 UTC m=+1381.361386815" observedRunningTime="2025-12-11 08:45:58.063087068 +0000 UTC m=+1382.322560994" watchObservedRunningTime="2025-12-11 08:45:58.066627194 +0000 UTC m=+1382.326101130" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.091720 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.130719 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.167711 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:58 crc kubenswrapper[4992]: E1211 08:45:58.168453 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-log" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.168467 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-log" Dec 11 08:45:58 crc 
kubenswrapper[4992]: E1211 08:45:58.168487 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-metadata" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.168493 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-metadata" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.168851 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-log" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.168892 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="03693c6c-5abe-4912-a367-85f9913399db" containerName="nova-metadata-metadata" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.170562 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.179935 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.179978 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.180628 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.348219 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58v8k\" (UniqueName: \"kubernetes.io/projected/415756d6-e2b6-4308-9094-ce8d95f69d8d-kube-api-access-58v8k\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.348281 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/415756d6-e2b6-4308-9094-ce8d95f69d8d-logs\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.348320 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.348397 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-config-data\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.348466 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.450070 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.450119 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58v8k\" (UniqueName: 
\"kubernetes.io/projected/415756d6-e2b6-4308-9094-ce8d95f69d8d-kube-api-access-58v8k\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.450142 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/415756d6-e2b6-4308-9094-ce8d95f69d8d-logs\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.450176 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.450246 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-config-data\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.451146 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/415756d6-e2b6-4308-9094-ce8d95f69d8d-logs\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.458347 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-config-data\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 
08:45:58.458477 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.460206 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.483933 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58v8k\" (UniqueName: \"kubernetes.io/projected/415756d6-e2b6-4308-9094-ce8d95f69d8d-kube-api-access-58v8k\") pod \"nova-metadata-0\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.503449 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:45:58 crc kubenswrapper[4992]: I1211 08:45:58.971614 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:45:58 crc kubenswrapper[4992]: W1211 08:45:58.995110 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod415756d6_e2b6_4308_9094_ce8d95f69d8d.slice/crio-50425396a71ce131760c7ba025bc3d5a87c7c4f46ce565bf2f67becb4ad6c34d WatchSource:0}: Error finding container 50425396a71ce131760c7ba025bc3d5a87c7c4f46ce565bf2f67becb4ad6c34d: Status 404 returned error can't find the container with id 50425396a71ce131760c7ba025bc3d5a87c7c4f46ce565bf2f67becb4ad6c34d Dec 11 08:45:59 crc kubenswrapper[4992]: I1211 08:45:59.046745 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"415756d6-e2b6-4308-9094-ce8d95f69d8d","Type":"ContainerStarted","Data":"50425396a71ce131760c7ba025bc3d5a87c7c4f46ce565bf2f67becb4ad6c34d"} Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.060610 4992 generic.go:334] "Generic (PLEG): container finished" podID="e5d5ca90-15ec-4992-b942-c8d63cd82ea6" containerID="1b14cbf457af962f9c8b2c30b2b84e577e1b6977f4b88290314fd9d29c4638aa" exitCode=0 Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.060740 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dl5hv" event={"ID":"e5d5ca90-15ec-4992-b942-c8d63cd82ea6","Type":"ContainerDied","Data":"1b14cbf457af962f9c8b2c30b2b84e577e1b6977f4b88290314fd9d29c4638aa"} Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.066107 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"415756d6-e2b6-4308-9094-ce8d95f69d8d","Type":"ContainerStarted","Data":"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466"} Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.066157 4992 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"415756d6-e2b6-4308-9094-ce8d95f69d8d","Type":"ContainerStarted","Data":"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81"} Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.111228 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.111195 podStartE2EDuration="2.111195s" podCreationTimestamp="2025-12-11 08:45:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:00.098027899 +0000 UTC m=+1384.357501895" watchObservedRunningTime="2025-12-11 08:46:00.111195 +0000 UTC m=+1384.370668926" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.112541 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03693c6c-5abe-4912-a367-85f9913399db" path="/var/lib/kubelet/pods/03693c6c-5abe-4912-a367-85f9913399db/volumes" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.180604 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.328118 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.328474 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.348412 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.365536 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.444570 4992 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-5784cf869f-wxwh2"] Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.444933 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerName="dnsmasq-dns" containerID="cri-o://b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e" gracePeriod=10 Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.469177 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 08:46:00 crc kubenswrapper[4992]: I1211 08:46:00.469226 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.014963 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.097738 4992 generic.go:334] "Generic (PLEG): container finished" podID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerID="b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e" exitCode=0 Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.097821 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" event={"ID":"79144990-622b-4d1b-8f2d-26707a7a6bd2","Type":"ContainerDied","Data":"b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e"} Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.097856 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" event={"ID":"79144990-622b-4d1b-8f2d-26707a7a6bd2","Type":"ContainerDied","Data":"e2ab64e57c7e8b1e17272870889a57500262f9cefba63ed8c5a08ff944ee0d19"} Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.097875 4992 scope.go:117] "RemoveContainer" 
containerID="b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.098035 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-wxwh2" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.103044 4992 generic.go:334] "Generic (PLEG): container finished" podID="765acb41-3b1b-4ff0-a57e-9334876b8750" containerID="27ca317d24e17faf4d99bfb2df1f2645a6a5e4a67c43cd8450cc3b61f210c631" exitCode=0 Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.103360 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9mjph" event={"ID":"765acb41-3b1b-4ff0-a57e-9334876b8750","Type":"ContainerDied","Data":"27ca317d24e17faf4d99bfb2df1f2645a6a5e4a67c43cd8450cc3b61f210c631"} Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.109242 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvzbz\" (UniqueName: \"kubernetes.io/projected/79144990-622b-4d1b-8f2d-26707a7a6bd2-kube-api-access-jvzbz\") pod \"79144990-622b-4d1b-8f2d-26707a7a6bd2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.109480 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-sb\") pod \"79144990-622b-4d1b-8f2d-26707a7a6bd2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.109746 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-swift-storage-0\") pod \"79144990-622b-4d1b-8f2d-26707a7a6bd2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.110183 4992 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-config\") pod \"79144990-622b-4d1b-8f2d-26707a7a6bd2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.110377 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-nb\") pod \"79144990-622b-4d1b-8f2d-26707a7a6bd2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.111011 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-svc\") pod \"79144990-622b-4d1b-8f2d-26707a7a6bd2\" (UID: \"79144990-622b-4d1b-8f2d-26707a7a6bd2\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.129220 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79144990-622b-4d1b-8f2d-26707a7a6bd2-kube-api-access-jvzbz" (OuterVolumeSpecName: "kube-api-access-jvzbz") pod "79144990-622b-4d1b-8f2d-26707a7a6bd2" (UID: "79144990-622b-4d1b-8f2d-26707a7a6bd2"). InnerVolumeSpecName "kube-api-access-jvzbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.143294 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.148243 4992 scope.go:117] "RemoveContainer" containerID="ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.192202 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-config" (OuterVolumeSpecName: "config") pod "79144990-622b-4d1b-8f2d-26707a7a6bd2" (UID: "79144990-622b-4d1b-8f2d-26707a7a6bd2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.215133 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79144990-622b-4d1b-8f2d-26707a7a6bd2" (UID: "79144990-622b-4d1b-8f2d-26707a7a6bd2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.215188 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79144990-622b-4d1b-8f2d-26707a7a6bd2" (UID: "79144990-622b-4d1b-8f2d-26707a7a6bd2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.216371 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.216465 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.216524 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvzbz\" (UniqueName: \"kubernetes.io/projected/79144990-622b-4d1b-8f2d-26707a7a6bd2-kube-api-access-jvzbz\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.216581 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.231114 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79144990-622b-4d1b-8f2d-26707a7a6bd2" (UID: "79144990-622b-4d1b-8f2d-26707a7a6bd2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.231377 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79144990-622b-4d1b-8f2d-26707a7a6bd2" (UID: "79144990-622b-4d1b-8f2d-26707a7a6bd2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.317278 4992 scope.go:117] "RemoveContainer" containerID="b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e" Dec 11 08:46:01 crc kubenswrapper[4992]: E1211 08:46:01.317946 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e\": container with ID starting with b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e not found: ID does not exist" containerID="b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.317983 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e"} err="failed to get container status \"b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e\": rpc error: code = NotFound desc = could not find container \"b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e\": container with ID starting with b148d22760850ffb23b7ba6bda714cf807c2f1fbcfb15640205234b886db4f9e not found: ID does not exist" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.318288 4992 scope.go:117] "RemoveContainer" containerID="ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.318673 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.318704 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79144990-622b-4d1b-8f2d-26707a7a6bd2-ovsdbserver-nb\") on node 
\"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: E1211 08:46:01.318873 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c\": container with ID starting with ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c not found: ID does not exist" containerID="ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.318910 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c"} err="failed to get container status \"ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c\": rpc error: code = NotFound desc = could not find container \"ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c\": container with ID starting with ba60cecc9595f961a9d3497409e14c35f9e896ef9f3ffdcf7f6246b7a40c6c6c not found: ID does not exist" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.451349 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-wxwh2"] Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.458974 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-wxwh2"] Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.479019 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dl5hv" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.550800 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.550800 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.622755 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4srk\" (UniqueName: \"kubernetes.io/projected/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-kube-api-access-b4srk\") pod \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.622885 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-scripts\") pod \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.622989 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-config-data\") pod \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.623018 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-combined-ca-bundle\") pod \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\" (UID: \"e5d5ca90-15ec-4992-b942-c8d63cd82ea6\") " Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.627833 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-scripts" (OuterVolumeSpecName: "scripts") pod "e5d5ca90-15ec-4992-b942-c8d63cd82ea6" (UID: "e5d5ca90-15ec-4992-b942-c8d63cd82ea6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.627853 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-kube-api-access-b4srk" (OuterVolumeSpecName: "kube-api-access-b4srk") pod "e5d5ca90-15ec-4992-b942-c8d63cd82ea6" (UID: "e5d5ca90-15ec-4992-b942-c8d63cd82ea6"). InnerVolumeSpecName "kube-api-access-b4srk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.654948 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-config-data" (OuterVolumeSpecName: "config-data") pod "e5d5ca90-15ec-4992-b942-c8d63cd82ea6" (UID: "e5d5ca90-15ec-4992-b942-c8d63cd82ea6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.666959 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5d5ca90-15ec-4992-b942-c8d63cd82ea6" (UID: "e5d5ca90-15ec-4992-b942-c8d63cd82ea6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.726064 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4srk\" (UniqueName: \"kubernetes.io/projected/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-kube-api-access-b4srk\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.726111 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.726124 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:01 crc kubenswrapper[4992]: I1211 08:46:01.726135 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d5ca90-15ec-4992-b942-c8d63cd82ea6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.106731 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" path="/var/lib/kubelet/pods/79144990-622b-4d1b-8f2d-26707a7a6bd2/volumes" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.113050 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dl5hv" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.113083 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dl5hv" event={"ID":"e5d5ca90-15ec-4992-b942-c8d63cd82ea6","Type":"ContainerDied","Data":"56cd1791b84d69714466c89a6dac706efce1cd5a88b92bb616be72386698cb24"} Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.113147 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56cd1791b84d69714466c89a6dac706efce1cd5a88b92bb616be72386698cb24" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.231695 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.231889 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-log" containerID="cri-o://3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5" gracePeriod=30 Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.232284 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-api" containerID="cri-o://c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69" gracePeriod=30 Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.266674 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.295913 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.296112 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-log" 
containerID="cri-o://20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81" gracePeriod=30 Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.296528 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-metadata" containerID="cri-o://96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466" gracePeriod=30 Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.546541 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.644487 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-combined-ca-bundle\") pod \"765acb41-3b1b-4ff0-a57e-9334876b8750\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.644698 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2nq6\" (UniqueName: \"kubernetes.io/projected/765acb41-3b1b-4ff0-a57e-9334876b8750-kube-api-access-p2nq6\") pod \"765acb41-3b1b-4ff0-a57e-9334876b8750\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.644829 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-config-data\") pod \"765acb41-3b1b-4ff0-a57e-9334876b8750\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.644901 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-scripts\") pod 
\"765acb41-3b1b-4ff0-a57e-9334876b8750\" (UID: \"765acb41-3b1b-4ff0-a57e-9334876b8750\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.653456 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-scripts" (OuterVolumeSpecName: "scripts") pod "765acb41-3b1b-4ff0-a57e-9334876b8750" (UID: "765acb41-3b1b-4ff0-a57e-9334876b8750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.687838 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765acb41-3b1b-4ff0-a57e-9334876b8750-kube-api-access-p2nq6" (OuterVolumeSpecName: "kube-api-access-p2nq6") pod "765acb41-3b1b-4ff0-a57e-9334876b8750" (UID: "765acb41-3b1b-4ff0-a57e-9334876b8750"). InnerVolumeSpecName "kube-api-access-p2nq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.731198 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "765acb41-3b1b-4ff0-a57e-9334876b8750" (UID: "765acb41-3b1b-4ff0-a57e-9334876b8750"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.733695 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-config-data" (OuterVolumeSpecName: "config-data") pod "765acb41-3b1b-4ff0-a57e-9334876b8750" (UID: "765acb41-3b1b-4ff0-a57e-9334876b8750"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.748089 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2nq6\" (UniqueName: \"kubernetes.io/projected/765acb41-3b1b-4ff0-a57e-9334876b8750-kube-api-access-p2nq6\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.748126 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.748164 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.748178 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765acb41-3b1b-4ff0-a57e-9334876b8750-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.854707 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.950244 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/415756d6-e2b6-4308-9094-ce8d95f69d8d-logs\") pod \"415756d6-e2b6-4308-9094-ce8d95f69d8d\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.950355 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-config-data\") pod \"415756d6-e2b6-4308-9094-ce8d95f69d8d\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.950418 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58v8k\" (UniqueName: \"kubernetes.io/projected/415756d6-e2b6-4308-9094-ce8d95f69d8d-kube-api-access-58v8k\") pod \"415756d6-e2b6-4308-9094-ce8d95f69d8d\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.950497 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-combined-ca-bundle\") pod \"415756d6-e2b6-4308-9094-ce8d95f69d8d\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.950578 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-nova-metadata-tls-certs\") pod \"415756d6-e2b6-4308-9094-ce8d95f69d8d\" (UID: \"415756d6-e2b6-4308-9094-ce8d95f69d8d\") " Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.954124 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/415756d6-e2b6-4308-9094-ce8d95f69d8d-logs" (OuterVolumeSpecName: "logs") pod "415756d6-e2b6-4308-9094-ce8d95f69d8d" (UID: "415756d6-e2b6-4308-9094-ce8d95f69d8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.963354 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415756d6-e2b6-4308-9094-ce8d95f69d8d-kube-api-access-58v8k" (OuterVolumeSpecName: "kube-api-access-58v8k") pod "415756d6-e2b6-4308-9094-ce8d95f69d8d" (UID: "415756d6-e2b6-4308-9094-ce8d95f69d8d"). InnerVolumeSpecName "kube-api-access-58v8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.989783 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-config-data" (OuterVolumeSpecName: "config-data") pod "415756d6-e2b6-4308-9094-ce8d95f69d8d" (UID: "415756d6-e2b6-4308-9094-ce8d95f69d8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:02 crc kubenswrapper[4992]: I1211 08:46:02.992680 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "415756d6-e2b6-4308-9094-ce8d95f69d8d" (UID: "415756d6-e2b6-4308-9094-ce8d95f69d8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.018509 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "415756d6-e2b6-4308-9094-ce8d95f69d8d" (UID: "415756d6-e2b6-4308-9094-ce8d95f69d8d"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.052555 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.052585 4992 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.052596 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/415756d6-e2b6-4308-9094-ce8d95f69d8d-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.052605 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415756d6-e2b6-4308-9094-ce8d95f69d8d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.052614 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58v8k\" (UniqueName: \"kubernetes.io/projected/415756d6-e2b6-4308-9094-ce8d95f69d8d-kube-api-access-58v8k\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.122888 4992 generic.go:334] "Generic (PLEG): container finished" podID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerID="96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466" exitCode=0 Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.122924 4992 generic.go:334] "Generic (PLEG): container finished" podID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerID="20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81" exitCode=143 Dec 11 08:46:03 crc kubenswrapper[4992]: 
I1211 08:46:03.122978 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"415756d6-e2b6-4308-9094-ce8d95f69d8d","Type":"ContainerDied","Data":"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466"} Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.122981 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.123010 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"415756d6-e2b6-4308-9094-ce8d95f69d8d","Type":"ContainerDied","Data":"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81"} Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.123026 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"415756d6-e2b6-4308-9094-ce8d95f69d8d","Type":"ContainerDied","Data":"50425396a71ce131760c7ba025bc3d5a87c7c4f46ce565bf2f67becb4ad6c34d"} Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.123044 4992 scope.go:117] "RemoveContainer" containerID="96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.126010 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9mjph" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.126018 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9mjph" event={"ID":"765acb41-3b1b-4ff0-a57e-9334876b8750","Type":"ContainerDied","Data":"cd64077d94392d6fc2f6496f50a6628d27be371930b0bc6ab5b7bcf1d1e1ff64"} Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.126258 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd64077d94392d6fc2f6496f50a6628d27be371930b0bc6ab5b7bcf1d1e1ff64" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.139126 4992 generic.go:334] "Generic (PLEG): container finished" podID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerID="3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5" exitCode=143 Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.139215 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84f93850-ff06-42e9-b1e3-07f0800312d9","Type":"ContainerDied","Data":"3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5"} Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.145932 4992 scope.go:117] "RemoveContainer" containerID="20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.166565 4992 scope.go:117] "RemoveContainer" containerID="96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.167563 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466\": container with ID starting with 96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466 not found: ID does not exist" containerID="96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466" Dec 11 
08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.167669 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466"} err="failed to get container status \"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466\": rpc error: code = NotFound desc = could not find container \"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466\": container with ID starting with 96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466 not found: ID does not exist" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.167815 4992 scope.go:117] "RemoveContainer" containerID="20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.168617 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81\": container with ID starting with 20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81 not found: ID does not exist" containerID="20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.168782 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81"} err="failed to get container status \"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81\": rpc error: code = NotFound desc = could not find container \"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81\": container with ID starting with 20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81 not found: ID does not exist" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.168858 4992 scope.go:117] "RemoveContainer" 
containerID="96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.169119 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466"} err="failed to get container status \"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466\": rpc error: code = NotFound desc = could not find container \"96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466\": container with ID starting with 96abad71ce4f0764a4c987150c252fc0e6eb1972490b3e0615ac13d11db5a466 not found: ID does not exist" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.169203 4992 scope.go:117] "RemoveContainer" containerID="20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.170852 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81"} err="failed to get container status \"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81\": rpc error: code = NotFound desc = could not find container \"20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81\": container with ID starting with 20da08d417c40bca2c19c0306e01287bdb79c18aabdc9eda3f2fc025cc518d81 not found: ID does not exist" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.181728 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.190025 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268115 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.268723 4992 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="765acb41-3b1b-4ff0-a57e-9334876b8750" containerName="nova-cell1-conductor-db-sync" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268740 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="765acb41-3b1b-4ff0-a57e-9334876b8750" containerName="nova-cell1-conductor-db-sync" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.268773 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerName="dnsmasq-dns" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268780 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerName="dnsmasq-dns" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.268791 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d5ca90-15ec-4992-b942-c8d63cd82ea6" containerName="nova-manage" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268798 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d5ca90-15ec-4992-b942-c8d63cd82ea6" containerName="nova-manage" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.268822 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-metadata" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268829 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-metadata" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.268851 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-log" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268859 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-log" Dec 11 08:46:03 crc kubenswrapper[4992]: E1211 08:46:03.268877 4992 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerName="init" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.268882 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerName="init" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.269219 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="79144990-622b-4d1b-8f2d-26707a7a6bd2" containerName="dnsmasq-dns" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.269241 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-log" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.269250 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="765acb41-3b1b-4ff0-a57e-9334876b8750" containerName="nova-cell1-conductor-db-sync" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.269279 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d5ca90-15ec-4992-b942-c8d63cd82ea6" containerName="nova-manage" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.269299 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" containerName="nova-metadata-metadata" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.291544 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.301024 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.304901 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.331046 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.388599 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.390146 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.396068 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.417476 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.459795 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5c47ec-8f6a-4af9-b762-104573dc7a27-logs\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.459850 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 
crc kubenswrapper[4992]: I1211 08:46:03.459932 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xkql\" (UniqueName: \"kubernetes.io/projected/1d5c47ec-8f6a-4af9-b762-104573dc7a27-kube-api-access-2xkql\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.459952 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.460014 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-config-data\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.561935 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562015 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xkql\" (UniqueName: \"kubernetes.io/projected/1d5c47ec-8f6a-4af9-b762-104573dc7a27-kube-api-access-2xkql\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562049 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562142 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562175 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmdd7\" (UniqueName: \"kubernetes.io/projected/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-kube-api-access-qmdd7\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562203 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-config-data\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562285 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5c47ec-8f6a-4af9-b762-104573dc7a27-logs\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562318 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.562781 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5c47ec-8f6a-4af9-b762-104573dc7a27-logs\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.566231 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.566266 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.566371 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-config-data\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.585263 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xkql\" (UniqueName: \"kubernetes.io/projected/1d5c47ec-8f6a-4af9-b762-104573dc7a27-kube-api-access-2xkql\") pod \"nova-metadata-0\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc 
kubenswrapper[4992]: I1211 08:46:03.637662 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.664830 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.664951 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.664984 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmdd7\" (UniqueName: \"kubernetes.io/projected/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-kube-api-access-qmdd7\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.671280 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.671429 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " 
pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.683332 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmdd7\" (UniqueName: \"kubernetes.io/projected/f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc-kube-api-access-qmdd7\") pod \"nova-cell1-conductor-0\" (UID: \"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc\") " pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.717507 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:03 crc kubenswrapper[4992]: I1211 08:46:03.973853 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:03 crc kubenswrapper[4992]: W1211 08:46:03.980895 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d5c47ec_8f6a_4af9_b762_104573dc7a27.slice/crio-cfe6420824a3474567bcec493247cd6fece9ec44600f318428041577e48e05b4 WatchSource:0}: Error finding container cfe6420824a3474567bcec493247cd6fece9ec44600f318428041577e48e05b4: Status 404 returned error can't find the container with id cfe6420824a3474567bcec493247cd6fece9ec44600f318428041577e48e05b4 Dec 11 08:46:04 crc kubenswrapper[4992]: I1211 08:46:04.113638 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415756d6-e2b6-4308-9094-ce8d95f69d8d" path="/var/lib/kubelet/pods/415756d6-e2b6-4308-9094-ce8d95f69d8d/volumes" Dec 11 08:46:04 crc kubenswrapper[4992]: I1211 08:46:04.150115 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d5c47ec-8f6a-4af9-b762-104573dc7a27","Type":"ContainerStarted","Data":"fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b"} Dec 11 08:46:04 crc kubenswrapper[4992]: I1211 08:46:04.150171 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"1d5c47ec-8f6a-4af9-b762-104573dc7a27","Type":"ContainerStarted","Data":"cfe6420824a3474567bcec493247cd6fece9ec44600f318428041577e48e05b4"} Dec 11 08:46:04 crc kubenswrapper[4992]: I1211 08:46:04.150266 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2fb63d41-453e-4a48-94f6-5db048e7975f" containerName="nova-scheduler-scheduler" containerID="cri-o://643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" gracePeriod=30 Dec 11 08:46:04 crc kubenswrapper[4992]: I1211 08:46:04.241132 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 08:46:04 crc kubenswrapper[4992]: W1211 08:46:04.242889 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf22ebfe6_970d_45f1_b5e1_c85baf4c7dbc.slice/crio-0071d06489d8fa1d9bd59682ece330bf2697cf8e19766e68765d1636b3de0f61 WatchSource:0}: Error finding container 0071d06489d8fa1d9bd59682ece330bf2697cf8e19766e68765d1636b3de0f61: Status 404 returned error can't find the container with id 0071d06489d8fa1d9bd59682ece330bf2697cf8e19766e68765d1636b3de0f61 Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.161175 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d5c47ec-8f6a-4af9-b762-104573dc7a27","Type":"ContainerStarted","Data":"a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7"} Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.163677 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc","Type":"ContainerStarted","Data":"dbf4ac4c04678262e51f9799edbbbcb662f4106402c9771e8233a35ca3251c90"} Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.163722 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc","Type":"ContainerStarted","Data":"0071d06489d8fa1d9bd59682ece330bf2697cf8e19766e68765d1636b3de0f61"} Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.163850 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.197288 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.197266227 podStartE2EDuration="2.197266227s" podCreationTimestamp="2025-12-11 08:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:05.184967318 +0000 UTC m=+1389.444441254" watchObservedRunningTime="2025-12-11 08:46:05.197266227 +0000 UTC m=+1389.456740163" Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.211696 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.2116204760000002 podStartE2EDuration="2.211620476s" podCreationTimestamp="2025-12-11 08:46:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:05.206958262 +0000 UTC m=+1389.466432208" watchObservedRunningTime="2025-12-11 08:46:05.211620476 +0000 UTC m=+1389.471094442" Dec 11 08:46:05 crc kubenswrapper[4992]: E1211 08:46:05.330381 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 08:46:05 crc kubenswrapper[4992]: E1211 08:46:05.331730 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown 
desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 08:46:05 crc kubenswrapper[4992]: E1211 08:46:05.332606 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 08:46:05 crc kubenswrapper[4992]: E1211 08:46:05.332678 4992 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="2fb63d41-453e-4a48-94f6-5db048e7975f" containerName="nova-scheduler-scheduler" Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.379261 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:46:05 crc kubenswrapper[4992]: I1211 08:46:05.379318 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:46:06 crc kubenswrapper[4992]: I1211 08:46:06.968169 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.129764 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-config-data\") pod \"2fb63d41-453e-4a48-94f6-5db048e7975f\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.129906 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rntl6\" (UniqueName: \"kubernetes.io/projected/2fb63d41-453e-4a48-94f6-5db048e7975f-kube-api-access-rntl6\") pod \"2fb63d41-453e-4a48-94f6-5db048e7975f\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.129947 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-combined-ca-bundle\") pod \"2fb63d41-453e-4a48-94f6-5db048e7975f\" (UID: \"2fb63d41-453e-4a48-94f6-5db048e7975f\") " Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.143478 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb63d41-453e-4a48-94f6-5db048e7975f-kube-api-access-rntl6" (OuterVolumeSpecName: "kube-api-access-rntl6") pod "2fb63d41-453e-4a48-94f6-5db048e7975f" (UID: "2fb63d41-453e-4a48-94f6-5db048e7975f"). InnerVolumeSpecName "kube-api-access-rntl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.161092 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-config-data" (OuterVolumeSpecName: "config-data") pod "2fb63d41-453e-4a48-94f6-5db048e7975f" (UID: "2fb63d41-453e-4a48-94f6-5db048e7975f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.161224 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fb63d41-453e-4a48-94f6-5db048e7975f" (UID: "2fb63d41-453e-4a48-94f6-5db048e7975f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.180901 4992 generic.go:334] "Generic (PLEG): container finished" podID="2fb63d41-453e-4a48-94f6-5db048e7975f" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" exitCode=0 Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.180946 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb63d41-453e-4a48-94f6-5db048e7975f","Type":"ContainerDied","Data":"643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4"} Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.180975 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2fb63d41-453e-4a48-94f6-5db048e7975f","Type":"ContainerDied","Data":"f1c58bfd36645732144951eee8834ad99258ea93a113dddabda8e8f40b030dca"} Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.180992 4992 scope.go:117] "RemoveContainer" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.181099 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.232048 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.232081 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rntl6\" (UniqueName: \"kubernetes.io/projected/2fb63d41-453e-4a48-94f6-5db048e7975f-kube-api-access-rntl6\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.232092 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fb63d41-453e-4a48-94f6-5db048e7975f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.240885 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.242979 4992 scope.go:117] "RemoveContainer" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" Dec 11 08:46:07 crc kubenswrapper[4992]: E1211 08:46:07.243674 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4\": container with ID starting with 643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4 not found: ID does not exist" containerID="643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.243721 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4"} err="failed to get container status 
\"643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4\": rpc error: code = NotFound desc = could not find container \"643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4\": container with ID starting with 643a1b90eb76f32fa59834dfc8456f935d9babafe8f60a213c298e492c5b8aa4 not found: ID does not exist" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.252015 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.269744 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:07 crc kubenswrapper[4992]: E1211 08:46:07.270242 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb63d41-453e-4a48-94f6-5db048e7975f" containerName="nova-scheduler-scheduler" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.270268 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb63d41-453e-4a48-94f6-5db048e7975f" containerName="nova-scheduler-scheduler" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.270527 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb63d41-453e-4a48-94f6-5db048e7975f" containerName="nova-scheduler-scheduler" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.271292 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.273575 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.281435 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.435168 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zczg8\" (UniqueName: \"kubernetes.io/projected/28c9189d-379d-4e21-9ae4-0b6df4951889-kube-api-access-zczg8\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.435507 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.435714 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-config-data\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.537921 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-config-data\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.537993 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zczg8\" (UniqueName: \"kubernetes.io/projected/28c9189d-379d-4e21-9ae4-0b6df4951889-kube-api-access-zczg8\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.538059 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.542489 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-config-data\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.542770 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.558357 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zczg8\" (UniqueName: \"kubernetes.io/projected/28c9189d-379d-4e21-9ae4-0b6df4951889-kube-api-access-zczg8\") pod \"nova-scheduler-0\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:07 crc kubenswrapper[4992]: I1211 08:46:07.602748 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.060238 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.100930 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.114195 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb63d41-453e-4a48-94f6-5db048e7975f" path="/var/lib/kubelet/pods/2fb63d41-453e-4a48-94f6-5db048e7975f/volumes" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.193552 4992 generic.go:334] "Generic (PLEG): container finished" podID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerID="c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69" exitCode=0 Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.193594 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.193650 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84f93850-ff06-42e9-b1e3-07f0800312d9","Type":"ContainerDied","Data":"c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69"} Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.193683 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84f93850-ff06-42e9-b1e3-07f0800312d9","Type":"ContainerDied","Data":"32efa4cedc09b4b4357c53e029b3299586ff7b21b657043b9a5ffd0e8c293177"} Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.193703 4992 scope.go:117] "RemoveContainer" containerID="c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.198401 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28c9189d-379d-4e21-9ae4-0b6df4951889","Type":"ContainerStarted","Data":"8d5310ac099f49449b930e2f2a6255692ae8232372ef5bdfb11ee398c9e18f25"} Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.216917 4992 scope.go:117] "RemoveContainer" containerID="3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.235840 4992 scope.go:117] "RemoveContainer" containerID="c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69" Dec 11 08:46:08 crc kubenswrapper[4992]: E1211 08:46:08.236186 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69\": container with ID starting with c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69 not found: ID does not exist" containerID="c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 
08:46:08.236224 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69"} err="failed to get container status \"c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69\": rpc error: code = NotFound desc = could not find container \"c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69\": container with ID starting with c9a0a569bff957ebdbb88cd942aa03d7ef7f132dc11d663396f3f6fcbe624b69 not found: ID does not exist" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.236248 4992 scope.go:117] "RemoveContainer" containerID="3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5" Dec 11 08:46:08 crc kubenswrapper[4992]: E1211 08:46:08.236546 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5\": container with ID starting with 3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5 not found: ID does not exist" containerID="3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.236576 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5"} err="failed to get container status \"3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5\": rpc error: code = NotFound desc = could not find container \"3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5\": container with ID starting with 3a77b229a1a9e73fd19d88660a213d3e9a4e16b1471efec5e9dd8e41fcefc5a5 not found: ID does not exist" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.251794 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-config-data\") pod \"84f93850-ff06-42e9-b1e3-07f0800312d9\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.251862 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-combined-ca-bundle\") pod \"84f93850-ff06-42e9-b1e3-07f0800312d9\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.251907 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84f93850-ff06-42e9-b1e3-07f0800312d9-logs\") pod \"84f93850-ff06-42e9-b1e3-07f0800312d9\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.252029 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/84f93850-ff06-42e9-b1e3-07f0800312d9-kube-api-access-4k87c\") pod \"84f93850-ff06-42e9-b1e3-07f0800312d9\" (UID: \"84f93850-ff06-42e9-b1e3-07f0800312d9\") " Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.254668 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84f93850-ff06-42e9-b1e3-07f0800312d9-logs" (OuterVolumeSpecName: "logs") pod "84f93850-ff06-42e9-b1e3-07f0800312d9" (UID: "84f93850-ff06-42e9-b1e3-07f0800312d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.257435 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84f93850-ff06-42e9-b1e3-07f0800312d9-kube-api-access-4k87c" (OuterVolumeSpecName: "kube-api-access-4k87c") pod "84f93850-ff06-42e9-b1e3-07f0800312d9" (UID: "84f93850-ff06-42e9-b1e3-07f0800312d9"). InnerVolumeSpecName "kube-api-access-4k87c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.277963 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-config-data" (OuterVolumeSpecName: "config-data") pod "84f93850-ff06-42e9-b1e3-07f0800312d9" (UID: "84f93850-ff06-42e9-b1e3-07f0800312d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.280539 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84f93850-ff06-42e9-b1e3-07f0800312d9" (UID: "84f93850-ff06-42e9-b1e3-07f0800312d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.354694 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.354736 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f93850-ff06-42e9-b1e3-07f0800312d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.354753 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84f93850-ff06-42e9-b1e3-07f0800312d9-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.354765 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4k87c\" (UniqueName: \"kubernetes.io/projected/84f93850-ff06-42e9-b1e3-07f0800312d9-kube-api-access-4k87c\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.526718 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.538830 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.549597 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:08 crc kubenswrapper[4992]: E1211 08:46:08.550261 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-log" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.550295 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-log" Dec 11 08:46:08 crc kubenswrapper[4992]: E1211 
08:46:08.550317 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-api" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.550328 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-api" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.550608 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-api" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.550710 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" containerName="nova-api-log" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.552057 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.554959 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.570885 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.637826 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.638947 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.660023 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.660098 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lc9s\" (UniqueName: \"kubernetes.io/projected/0f771c72-ebb5-423a-a133-2a843afc0afe-kube-api-access-8lc9s\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.660231 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-config-data\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.660261 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f771c72-ebb5-423a-a133-2a843afc0afe-logs\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.761671 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.761729 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lc9s\" (UniqueName: \"kubernetes.io/projected/0f771c72-ebb5-423a-a133-2a843afc0afe-kube-api-access-8lc9s\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.761862 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-config-data\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.761905 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f771c72-ebb5-423a-a133-2a843afc0afe-logs\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.762290 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f771c72-ebb5-423a-a133-2a843afc0afe-logs\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.774466 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.775200 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-config-data\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.781733 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lc9s\" (UniqueName: \"kubernetes.io/projected/0f771c72-ebb5-423a-a133-2a843afc0afe-kube-api-access-8lc9s\") pod \"nova-api-0\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " pod="openstack/nova-api-0" Dec 11 08:46:08 crc kubenswrapper[4992]: I1211 08:46:08.954393 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:09 crc kubenswrapper[4992]: I1211 08:46:09.208613 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28c9189d-379d-4e21-9ae4-0b6df4951889","Type":"ContainerStarted","Data":"b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e"} Dec 11 08:46:09 crc kubenswrapper[4992]: I1211 08:46:09.228134 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.228117342 podStartE2EDuration="2.228117342s" podCreationTimestamp="2025-12-11 08:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:09.22228712 +0000 UTC m=+1393.481761046" watchObservedRunningTime="2025-12-11 08:46:09.228117342 +0000 UTC m=+1393.487591268" Dec 11 08:46:09 crc kubenswrapper[4992]: I1211 08:46:09.408627 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:09 crc kubenswrapper[4992]: W1211 08:46:09.414537 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f771c72_ebb5_423a_a133_2a843afc0afe.slice/crio-d3f974070919d2fc29485b047932a1dfe2aa49dca77560f22aeed9e7ea6305f0 WatchSource:0}: Error finding container d3f974070919d2fc29485b047932a1dfe2aa49dca77560f22aeed9e7ea6305f0: Status 404 returned error can't find the container with id d3f974070919d2fc29485b047932a1dfe2aa49dca77560f22aeed9e7ea6305f0 Dec 11 08:46:10 crc kubenswrapper[4992]: I1211 08:46:10.107494 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84f93850-ff06-42e9-b1e3-07f0800312d9" path="/var/lib/kubelet/pods/84f93850-ff06-42e9-b1e3-07f0800312d9/volumes" Dec 11 08:46:10 crc kubenswrapper[4992]: I1211 08:46:10.221526 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0f771c72-ebb5-423a-a133-2a843afc0afe","Type":"ContainerStarted","Data":"82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e"} Dec 11 08:46:10 crc kubenswrapper[4992]: I1211 08:46:10.221576 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f771c72-ebb5-423a-a133-2a843afc0afe","Type":"ContainerStarted","Data":"e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597"} Dec 11 08:46:10 crc kubenswrapper[4992]: I1211 08:46:10.221589 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f771c72-ebb5-423a-a133-2a843afc0afe","Type":"ContainerStarted","Data":"d3f974070919d2fc29485b047932a1dfe2aa49dca77560f22aeed9e7ea6305f0"} Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.536476 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.5364561820000002 podStartE2EDuration="3.536456182s" podCreationTimestamp="2025-12-11 08:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:10.245713281 +0000 UTC m=+1394.505187217" watchObservedRunningTime="2025-12-11 08:46:11.536456182 +0000 UTC m=+1395.795930118" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.542838 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vxxw"] Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.544952 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.553880 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vxxw"] Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.638532 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-utilities\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.638937 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7tml\" (UniqueName: \"kubernetes.io/projected/abc79732-a1d1-417e-9835-7f9ae9709d8c-kube-api-access-v7tml\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.639132 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-catalog-content\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.741727 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-catalog-content\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.741914 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-utilities\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.741953 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7tml\" (UniqueName: \"kubernetes.io/projected/abc79732-a1d1-417e-9835-7f9ae9709d8c-kube-api-access-v7tml\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.742340 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-utilities\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.742340 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-catalog-content\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.765557 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7tml\" (UniqueName: \"kubernetes.io/projected/abc79732-a1d1-417e-9835-7f9ae9709d8c-kube-api-access-v7tml\") pod \"redhat-operators-2vxxw\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:11 crc kubenswrapper[4992]: I1211 08:46:11.864891 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:12 crc kubenswrapper[4992]: I1211 08:46:12.308293 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vxxw"] Dec 11 08:46:12 crc kubenswrapper[4992]: W1211 08:46:12.312044 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabc79732_a1d1_417e_9835_7f9ae9709d8c.slice/crio-b8e9744dcf8017357ddcce4466e2c18e3f529441bf0b315e5e0bb085e73beec1 WatchSource:0}: Error finding container b8e9744dcf8017357ddcce4466e2c18e3f529441bf0b315e5e0bb085e73beec1: Status 404 returned error can't find the container with id b8e9744dcf8017357ddcce4466e2c18e3f529441bf0b315e5e0bb085e73beec1 Dec 11 08:46:12 crc kubenswrapper[4992]: I1211 08:46:12.603494 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 08:46:13 crc kubenswrapper[4992]: I1211 08:46:13.257907 4992 generic.go:334] "Generic (PLEG): container finished" podID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerID="a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325" exitCode=0 Dec 11 08:46:13 crc kubenswrapper[4992]: I1211 08:46:13.257966 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerDied","Data":"a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325"} Dec 11 08:46:13 crc kubenswrapper[4992]: I1211 08:46:13.257992 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerStarted","Data":"b8e9744dcf8017357ddcce4466e2c18e3f529441bf0b315e5e0bb085e73beec1"} Dec 11 08:46:13 crc kubenswrapper[4992]: I1211 08:46:13.638250 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Dec 11 08:46:13 crc kubenswrapper[4992]: I1211 08:46:13.638335 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 08:46:13 crc kubenswrapper[4992]: I1211 08:46:13.747352 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 08:46:14 crc kubenswrapper[4992]: I1211 08:46:14.277387 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerStarted","Data":"598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a"} Dec 11 08:46:14 crc kubenswrapper[4992]: I1211 08:46:14.651830 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:14 crc kubenswrapper[4992]: I1211 08:46:14.651852 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:16 crc kubenswrapper[4992]: I1211 08:46:16.297211 4992 generic.go:334] "Generic (PLEG): container finished" podID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerID="598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a" exitCode=0 Dec 11 08:46:16 crc kubenswrapper[4992]: I1211 08:46:16.297500 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" 
event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerDied","Data":"598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a"} Dec 11 08:46:17 crc kubenswrapper[4992]: I1211 08:46:17.603044 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 08:46:17 crc kubenswrapper[4992]: I1211 08:46:17.630839 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 08:46:18 crc kubenswrapper[4992]: I1211 08:46:18.318893 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerStarted","Data":"35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1"} Dec 11 08:46:18 crc kubenswrapper[4992]: I1211 08:46:18.344173 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vxxw" podStartSLOduration=2.894581891 podStartE2EDuration="7.344154883s" podCreationTimestamp="2025-12-11 08:46:11 +0000 UTC" firstStartedPulling="2025-12-11 08:46:13.260935675 +0000 UTC m=+1397.520409601" lastFinishedPulling="2025-12-11 08:46:17.710508667 +0000 UTC m=+1401.969982593" observedRunningTime="2025-12-11 08:46:18.338588378 +0000 UTC m=+1402.598062304" watchObservedRunningTime="2025-12-11 08:46:18.344154883 +0000 UTC m=+1402.603628809" Dec 11 08:46:18 crc kubenswrapper[4992]: I1211 08:46:18.354741 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 08:46:18 crc kubenswrapper[4992]: I1211 08:46:18.955519 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 08:46:18 crc kubenswrapper[4992]: I1211 08:46:18.955597 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 08:46:19 crc kubenswrapper[4992]: I1211 
08:46:19.316538 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 08:46:20 crc kubenswrapper[4992]: I1211 08:46:20.038852 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:20 crc kubenswrapper[4992]: I1211 08:46:20.038852 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:21 crc kubenswrapper[4992]: I1211 08:46:21.865506 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:21 crc kubenswrapper[4992]: I1211 08:46:21.865857 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:22 crc kubenswrapper[4992]: I1211 08:46:22.909009 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2vxxw" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="registry-server" probeResult="failure" output=< Dec 11 08:46:22 crc kubenswrapper[4992]: timeout: failed to connect service ":50051" within 1s Dec 11 08:46:22 crc kubenswrapper[4992]: > Dec 11 08:46:22 crc kubenswrapper[4992]: I1211 08:46:22.967788 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:46:22 crc kubenswrapper[4992]: I1211 08:46:22.968216 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" 
podUID="f99cf716-c024-485a-8d47-20218de1cb10" containerName="kube-state-metrics" containerID="cri-o://b2c192bbc9693174279048dbd4aaf94b35fbc1cc5590bf74ee51197729b6b44e" gracePeriod=30 Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.362132 4992 generic.go:334] "Generic (PLEG): container finished" podID="f99cf716-c024-485a-8d47-20218de1cb10" containerID="b2c192bbc9693174279048dbd4aaf94b35fbc1cc5590bf74ee51197729b6b44e" exitCode=2 Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.362176 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f99cf716-c024-485a-8d47-20218de1cb10","Type":"ContainerDied","Data":"b2c192bbc9693174279048dbd4aaf94b35fbc1cc5590bf74ee51197729b6b44e"} Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.457269 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.553695 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szfwz\" (UniqueName: \"kubernetes.io/projected/f99cf716-c024-485a-8d47-20218de1cb10-kube-api-access-szfwz\") pod \"f99cf716-c024-485a-8d47-20218de1cb10\" (UID: \"f99cf716-c024-485a-8d47-20218de1cb10\") " Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.567883 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f99cf716-c024-485a-8d47-20218de1cb10-kube-api-access-szfwz" (OuterVolumeSpecName: "kube-api-access-szfwz") pod "f99cf716-c024-485a-8d47-20218de1cb10" (UID: "f99cf716-c024-485a-8d47-20218de1cb10"). InnerVolumeSpecName "kube-api-access-szfwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.644519 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.644586 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.652210 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.654605 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 08:46:23 crc kubenswrapper[4992]: I1211 08:46:23.655596 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szfwz\" (UniqueName: \"kubernetes.io/projected/f99cf716-c024-485a-8d47-20218de1cb10-kube-api-access-szfwz\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.374729 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.375209 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f99cf716-c024-485a-8d47-20218de1cb10","Type":"ContainerDied","Data":"9ef4f23e1481705b5ec964be07baae3c2ebaa1b6bb97a059dad3b9a78e52b9df"} Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.375238 4992 scope.go:117] "RemoveContainer" containerID="b2c192bbc9693174279048dbd4aaf94b35fbc1cc5590bf74ee51197729b6b44e" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.398734 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.407863 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.418364 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:46:24 crc kubenswrapper[4992]: E1211 08:46:24.418828 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f99cf716-c024-485a-8d47-20218de1cb10" containerName="kube-state-metrics" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.418846 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f99cf716-c024-485a-8d47-20218de1cb10" containerName="kube-state-metrics" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.419032 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f99cf716-c024-485a-8d47-20218de1cb10" containerName="kube-state-metrics" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.419681 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.421617 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.427222 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.458098 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.573113 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.573266 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.573321 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxp6\" (UniqueName: \"kubernetes.io/projected/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-api-access-2rxp6\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.573338 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.675312 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.675472 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.675535 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxp6\" (UniqueName: \"kubernetes.io/projected/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-api-access-2rxp6\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.675559 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.680071 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.680317 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.684439 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.692434 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxp6\" (UniqueName: \"kubernetes.io/projected/c46c3920-de00-4d05-9a50-406b7efd3b8d-kube-api-access-2rxp6\") pod \"kube-state-metrics-0\" (UID: \"c46c3920-de00-4d05-9a50-406b7efd3b8d\") " pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.780920 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.984051 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.984534 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-central-agent" containerID="cri-o://3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b" gracePeriod=30 Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.984993 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="proxy-httpd" containerID="cri-o://be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1" gracePeriod=30 Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.985042 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="sg-core" containerID="cri-o://d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4" gracePeriod=30 Dec 11 08:46:24 crc kubenswrapper[4992]: I1211 08:46:24.985076 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-notification-agent" containerID="cri-o://b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f" gracePeriod=30 Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.328392 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 08:46:25 crc kubenswrapper[4992]: W1211 08:46:25.333572 4992 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc46c3920_de00_4d05_9a50_406b7efd3b8d.slice/crio-07b2fa8ad7a2a8e7eb2edafedb09c40219f44d8e51a765e736c438a1c4c64729 WatchSource:0}: Error finding container 07b2fa8ad7a2a8e7eb2edafedb09c40219f44d8e51a765e736c438a1c4c64729: Status 404 returned error can't find the container with id 07b2fa8ad7a2a8e7eb2edafedb09c40219f44d8e51a765e736c438a1c4c64729 Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.390414 4992 generic.go:334] "Generic (PLEG): container finished" podID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerID="be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1" exitCode=0 Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.390902 4992 generic.go:334] "Generic (PLEG): container finished" podID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerID="d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4" exitCode=2 Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.390914 4992 generic.go:334] "Generic (PLEG): container finished" podID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerID="3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b" exitCode=0 Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.390629 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerDied","Data":"be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1"} Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.390977 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerDied","Data":"d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4"} Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.390990 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerDied","Data":"3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b"} Dec 11 08:46:25 crc kubenswrapper[4992]: I1211 08:46:25.392765 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c46c3920-de00-4d05-9a50-406b7efd3b8d","Type":"ContainerStarted","Data":"07b2fa8ad7a2a8e7eb2edafedb09c40219f44d8e51a765e736c438a1c4c64729"} Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.113520 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f99cf716-c024-485a-8d47-20218de1cb10" path="/var/lib/kubelet/pods/f99cf716-c024-485a-8d47-20218de1cb10/volumes" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.347830 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.401524 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c46c3920-de00-4d05-9a50-406b7efd3b8d","Type":"ContainerStarted","Data":"e5c16bdce69f9014624cb4fb664919f87aefbfdc60984d99536fc84d722cfad6"} Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.402529 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.404452 4992 generic.go:334] "Generic (PLEG): container finished" podID="83ec9f41-2ca5-4d1e-b876-f986326499f8" containerID="be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038" exitCode=137 Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.404491 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ec9f41-2ca5-4d1e-b876-f986326499f8","Type":"ContainerDied","Data":"be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038"} Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.404513 4992 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"83ec9f41-2ca5-4d1e-b876-f986326499f8","Type":"ContainerDied","Data":"80004c6bd027ecef2a01d95b76c09a65a391c7d6399ab6ca6a55f6435c81a098"} Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.404531 4992 scope.go:117] "RemoveContainer" containerID="be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.404689 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.431455 4992 scope.go:117] "RemoveContainer" containerID="be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038" Dec 11 08:46:26 crc kubenswrapper[4992]: E1211 08:46:26.432330 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038\": container with ID starting with be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038 not found: ID does not exist" containerID="be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.432366 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038"} err="failed to get container status \"be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038\": rpc error: code = NotFound desc = could not find container \"be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038\": container with ID starting with be872bdedc9811fd486a294a2f2162eeb8f4283cf8fb7bc543b15a0c3c749038 not found: ID does not exist" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.432625 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=2.056589362 podStartE2EDuration="2.432613602s" podCreationTimestamp="2025-12-11 08:46:24 +0000 UTC" firstStartedPulling="2025-12-11 08:46:25.337549232 +0000 UTC m=+1409.597023158" lastFinishedPulling="2025-12-11 08:46:25.713573482 +0000 UTC m=+1409.973047398" observedRunningTime="2025-12-11 08:46:26.417082565 +0000 UTC m=+1410.676556491" watchObservedRunningTime="2025-12-11 08:46:26.432613602 +0000 UTC m=+1410.692087528" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.515442 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-combined-ca-bundle\") pod \"83ec9f41-2ca5-4d1e-b876-f986326499f8\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.515519 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-config-data\") pod \"83ec9f41-2ca5-4d1e-b876-f986326499f8\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.515816 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42k47\" (UniqueName: \"kubernetes.io/projected/83ec9f41-2ca5-4d1e-b876-f986326499f8-kube-api-access-42k47\") pod \"83ec9f41-2ca5-4d1e-b876-f986326499f8\" (UID: \"83ec9f41-2ca5-4d1e-b876-f986326499f8\") " Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.526550 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ec9f41-2ca5-4d1e-b876-f986326499f8-kube-api-access-42k47" (OuterVolumeSpecName: "kube-api-access-42k47") pod "83ec9f41-2ca5-4d1e-b876-f986326499f8" (UID: "83ec9f41-2ca5-4d1e-b876-f986326499f8"). InnerVolumeSpecName "kube-api-access-42k47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.541521 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-config-data" (OuterVolumeSpecName: "config-data") pod "83ec9f41-2ca5-4d1e-b876-f986326499f8" (UID: "83ec9f41-2ca5-4d1e-b876-f986326499f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.547039 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83ec9f41-2ca5-4d1e-b876-f986326499f8" (UID: "83ec9f41-2ca5-4d1e-b876-f986326499f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.618587 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42k47\" (UniqueName: \"kubernetes.io/projected/83ec9f41-2ca5-4d1e-b876-f986326499f8-kube-api-access-42k47\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.618652 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.618673 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83ec9f41-2ca5-4d1e-b876-f986326499f8-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.758440 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.773799 4992 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.786258 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:46:26 crc kubenswrapper[4992]: E1211 08:46:26.786663 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ec9f41-2ca5-4d1e-b876-f986326499f8" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.786674 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ec9f41-2ca5-4d1e-b876-f986326499f8" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.786901 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ec9f41-2ca5-4d1e-b876-f986326499f8" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.787562 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.790238 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.790512 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.790726 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.798369 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.923978 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.924118 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvgt\" (UniqueName: \"kubernetes.io/projected/34747d0d-221e-453b-9685-2e0ce24f21ff-kube-api-access-dkvgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.924472 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.924849 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:26 crc kubenswrapper[4992]: I1211 08:46:26.925085 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.027201 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.027656 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.027686 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvgt\" (UniqueName: \"kubernetes.io/projected/34747d0d-221e-453b-9685-2e0ce24f21ff-kube-api-access-dkvgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.027735 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.027814 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.031186 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.031841 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.032788 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.051884 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34747d0d-221e-453b-9685-2e0ce24f21ff-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.053409 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvgt\" (UniqueName: \"kubernetes.io/projected/34747d0d-221e-453b-9685-2e0ce24f21ff-kube-api-access-dkvgt\") pod \"nova-cell1-novncproxy-0\" (UID: \"34747d0d-221e-453b-9685-2e0ce24f21ff\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.114034 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.191527 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333319 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-run-httpd\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333692 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7b8l\" (UniqueName: \"kubernetes.io/projected/33e1f655-4ae5-4bc7-8829-501d73b23615-kube-api-access-t7b8l\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333787 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-config-data\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333864 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-combined-ca-bundle\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333860 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333897 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-log-httpd\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333958 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-scripts\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.333993 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-sg-core-conf-yaml\") pod \"33e1f655-4ae5-4bc7-8829-501d73b23615\" (UID: \"33e1f655-4ae5-4bc7-8829-501d73b23615\") " Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.334530 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.334844 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.341578 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-scripts" (OuterVolumeSpecName: "scripts") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.341890 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e1f655-4ae5-4bc7-8829-501d73b23615-kube-api-access-t7b8l" (OuterVolumeSpecName: "kube-api-access-t7b8l") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). InnerVolumeSpecName "kube-api-access-t7b8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.386413 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.423722 4992 generic.go:334] "Generic (PLEG): container finished" podID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerID="b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f" exitCode=0 Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.423786 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerDied","Data":"b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f"} Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.423812 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"33e1f655-4ae5-4bc7-8829-501d73b23615","Type":"ContainerDied","Data":"cac6fb8a721e75856c70d58ff4907acdc81bbafa2186a3d520bd2f8e995e30a7"} Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.423828 4992 scope.go:117] "RemoveContainer" containerID="be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.423968 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.436032 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.436069 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.436081 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7b8l\" (UniqueName: \"kubernetes.io/projected/33e1f655-4ae5-4bc7-8829-501d73b23615-kube-api-access-t7b8l\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.436092 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/33e1f655-4ae5-4bc7-8829-501d73b23615-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.436292 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.454266 4992 scope.go:117] "RemoveContainer" containerID="d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.462214 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.471960 4992 scope.go:117] "RemoveContainer" containerID="b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.480557 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-config-data" (OuterVolumeSpecName: "config-data") pod "33e1f655-4ae5-4bc7-8829-501d73b23615" (UID: "33e1f655-4ae5-4bc7-8829-501d73b23615"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.511992 4992 scope.go:117] "RemoveContainer" containerID="3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.534989 4992 scope.go:117] "RemoveContainer" containerID="be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.535664 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1\": container with ID starting with be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1 not found: ID does not exist" containerID="be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.535715 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1"} err="failed to get container status \"be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1\": rpc error: code = NotFound desc = could not find container \"be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1\": container with ID starting with 
be6582f2c8675a9fe7c04195d64c84a2323f752c6d1ee33a1f8ebac69de4d9d1 not found: ID does not exist" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.535748 4992 scope.go:117] "RemoveContainer" containerID="d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.536153 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4\": container with ID starting with d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4 not found: ID does not exist" containerID="d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.536184 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4"} err="failed to get container status \"d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4\": rpc error: code = NotFound desc = could not find container \"d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4\": container with ID starting with d937924b89a3610ca99840f664c798a6b6e85ddd5c205a541363a6a9393248e4 not found: ID does not exist" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.536216 4992 scope.go:117] "RemoveContainer" containerID="b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.536452 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f\": container with ID starting with b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f not found: ID does not exist" containerID="b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f" Dec 11 08:46:27 crc 
kubenswrapper[4992]: I1211 08:46:27.536486 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f"} err="failed to get container status \"b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f\": rpc error: code = NotFound desc = could not find container \"b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f\": container with ID starting with b3b46602eabda77f209facc7a1fcabc41c3a988ff262b0ad05ffd0674e6df28f not found: ID does not exist" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.536510 4992 scope.go:117] "RemoveContainer" containerID="3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.536676 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b\": container with ID starting with 3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b not found: ID does not exist" containerID="3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.536696 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b"} err="failed to get container status \"3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b\": rpc error: code = NotFound desc = could not find container \"3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b\": container with ID starting with 3462c4202f247a43730749f8f50634b19f038fd41e3e87b0a7d483ddfbc66a3b not found: ID does not exist" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.537448 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.537891 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e1f655-4ae5-4bc7-8829-501d73b23615-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.790114 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.805660 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.834115 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.835079 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="sg-core" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835095 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="sg-core" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.835118 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-central-agent" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835125 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-central-agent" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.835148 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-notification-agent" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835154 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" 
containerName="ceilometer-notification-agent" Dec 11 08:46:27 crc kubenswrapper[4992]: E1211 08:46:27.835181 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="proxy-httpd" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835191 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="proxy-httpd" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835612 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-central-agent" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835652 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="ceilometer-notification-agent" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835679 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="proxy-httpd" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.835695 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" containerName="sg-core" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.839222 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.848167 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.848502 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.848889 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.851865 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.951682 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-log-httpd\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.951740 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.951815 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-scripts\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.951868 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-74w44\" (UniqueName: \"kubernetes.io/projected/29ba6ac6-907c-45c1-98b6-ee952adb74b1-kube-api-access-74w44\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.951900 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-run-httpd\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.951981 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.952028 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:27 crc kubenswrapper[4992]: I1211 08:46:27.952090 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-config-data\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.054115 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.054253 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-config-data\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.054360 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-log-httpd\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.054910 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-log-httpd\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.054983 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.055053 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-scripts\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.055136 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-74w44\" (UniqueName: \"kubernetes.io/projected/29ba6ac6-907c-45c1-98b6-ee952adb74b1-kube-api-access-74w44\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.055190 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-run-httpd\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.055886 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-run-httpd\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.056708 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.057867 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.058563 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-config-data\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc 
kubenswrapper[4992]: I1211 08:46:28.060291 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.060363 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.060888 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-scripts\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.083345 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74w44\" (UniqueName: \"kubernetes.io/projected/29ba6ac6-907c-45c1-98b6-ee952adb74b1-kube-api-access-74w44\") pod \"ceilometer-0\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") " pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.111107 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e1f655-4ae5-4bc7-8829-501d73b23615" path="/var/lib/kubelet/pods/33e1f655-4ae5-4bc7-8829-501d73b23615/volumes" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.111868 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83ec9f41-2ca5-4d1e-b876-f986326499f8" path="/var/lib/kubelet/pods/83ec9f41-2ca5-4d1e-b876-f986326499f8/volumes" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.180696 4992 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.440269 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"34747d0d-221e-453b-9685-2e0ce24f21ff","Type":"ContainerStarted","Data":"e97844866eb9aed1cedd56ed6c0e26973e3d282fe82602fc0db90e6a474156b6"} Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.440609 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"34747d0d-221e-453b-9685-2e0ce24f21ff","Type":"ContainerStarted","Data":"0f09d76c6c4b7437196716521e481d08817376bf4e5f66a5b29ff8be5180cfe6"} Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.460077 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.460058891 podStartE2EDuration="2.460058891s" podCreationTimestamp="2025-12-11 08:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:28.456603757 +0000 UTC m=+1412.716077693" watchObservedRunningTime="2025-12-11 08:46:28.460058891 +0000 UTC m=+1412.719532817" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.627240 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:28 crc kubenswrapper[4992]: W1211 08:46:28.630742 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29ba6ac6_907c_45c1_98b6_ee952adb74b1.slice/crio-66b4f46c8866ef4fe517d95221436a2947d23cb41dd3c2b9a92fd9519851d113 WatchSource:0}: Error finding container 66b4f46c8866ef4fe517d95221436a2947d23cb41dd3c2b9a92fd9519851d113: Status 404 returned error can't find the container with id 66b4f46c8866ef4fe517d95221436a2947d23cb41dd3c2b9a92fd9519851d113 Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.958948 
4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.960544 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.960670 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 08:46:28 crc kubenswrapper[4992]: I1211 08:46:28.969660 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.450930 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerStarted","Data":"cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3"} Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.451519 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerStarted","Data":"66b4f46c8866ef4fe517d95221436a2947d23cb41dd3c2b9a92fd9519851d113"} Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.451565 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.454839 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.700421 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-tzl55"] Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.702378 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.732259 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-tzl55"] Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.806818 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-config\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.806911 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.806955 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bttc7\" (UniqueName: \"kubernetes.io/projected/daf07ee3-b6e9-4d16-972c-9df83d121006-kube-api-access-bttc7\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.807010 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.807236 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.807264 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.908486 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.908975 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.909081 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-config\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.909140 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.909177 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bttc7\" (UniqueName: \"kubernetes.io/projected/daf07ee3-b6e9-4d16-972c-9df83d121006-kube-api-access-bttc7\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.909245 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.909826 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.909988 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.911216 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-config\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.911402 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.918133 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:29 crc kubenswrapper[4992]: I1211 08:46:29.933275 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bttc7\" (UniqueName: \"kubernetes.io/projected/daf07ee3-b6e9-4d16-972c-9df83d121006-kube-api-access-bttc7\") pod \"dnsmasq-dns-59cf4bdb65-tzl55\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:30 crc kubenswrapper[4992]: I1211 08:46:30.031171 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:30 crc kubenswrapper[4992]: I1211 08:46:30.466825 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerStarted","Data":"f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da"} Dec 11 08:46:30 crc kubenswrapper[4992]: I1211 08:46:30.557522 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-tzl55"] Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.343316 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.476463 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" event={"ID":"daf07ee3-b6e9-4d16-972c-9df83d121006","Type":"ContainerDied","Data":"69c80914ff00ec3ea30b5a00fa3ead0f2b2b214b18ea3b2ac18b6eb428935f52"} Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.478498 4992 generic.go:334] "Generic (PLEG): container finished" podID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerID="69c80914ff00ec3ea30b5a00fa3ead0f2b2b214b18ea3b2ac18b6eb428935f52" exitCode=0 Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.478734 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" event={"ID":"daf07ee3-b6e9-4d16-972c-9df83d121006","Type":"ContainerStarted","Data":"a50f60c0a931794d6ac61bfcb168af4c5720e3a218783689fb61ba083bde0fa2"} Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.486714 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerStarted","Data":"319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646"} Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.912542 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:31 crc kubenswrapper[4992]: I1211 08:46:31.961020 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.114506 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.155311 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vxxw"] Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.497057 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.502362 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerStarted","Data":"cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1"} Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.502569 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-notification-agent" containerID="cri-o://f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da" gracePeriod=30 Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.502585 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="sg-core" containerID="cri-o://319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646" gracePeriod=30 Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.502536 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="proxy-httpd" 
containerID="cri-o://cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1" gracePeriod=30 Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.502895 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.502681 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-central-agent" containerID="cri-o://cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3" gracePeriod=30 Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.507847 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" event={"ID":"daf07ee3-b6e9-4d16-972c-9df83d121006","Type":"ContainerStarted","Data":"6a9fc6abaead35ec9384932e100c7af9b6e32b797b7afe3943fd2d6d2403bf7a"} Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.507921 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-log" containerID="cri-o://e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597" gracePeriod=30 Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.508217 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-api" containerID="cri-o://82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e" gracePeriod=30 Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.508664 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.587587 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.988477732 
podStartE2EDuration="5.587563053s" podCreationTimestamp="2025-12-11 08:46:27 +0000 UTC" firstStartedPulling="2025-12-11 08:46:28.633440891 +0000 UTC m=+1412.892914817" lastFinishedPulling="2025-12-11 08:46:32.232526212 +0000 UTC m=+1416.492000138" observedRunningTime="2025-12-11 08:46:32.540982433 +0000 UTC m=+1416.800456359" watchObservedRunningTime="2025-12-11 08:46:32.587563053 +0000 UTC m=+1416.847036979" Dec 11 08:46:32 crc kubenswrapper[4992]: I1211 08:46:32.592271 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" podStartSLOduration=3.592257287 podStartE2EDuration="3.592257287s" podCreationTimestamp="2025-12-11 08:46:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:32.585973365 +0000 UTC m=+1416.845447301" watchObservedRunningTime="2025-12-11 08:46:32.592257287 +0000 UTC m=+1416.851731213" Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.519294 4992 generic.go:334] "Generic (PLEG): container finished" podID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerID="e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597" exitCode=143 Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.519373 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f771c72-ebb5-423a-a133-2a843afc0afe","Type":"ContainerDied","Data":"e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597"} Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.522429 4992 generic.go:334] "Generic (PLEG): container finished" podID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerID="319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646" exitCode=2 Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.522460 4992 generic.go:334] "Generic (PLEG): container finished" podID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" 
containerID="f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da" exitCode=0 Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.522535 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerDied","Data":"319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646"} Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.522609 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerDied","Data":"f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da"} Dec 11 08:46:33 crc kubenswrapper[4992]: I1211 08:46:33.522722 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vxxw" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="registry-server" containerID="cri-o://35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1" gracePeriod=2 Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.012266 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.200066 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-catalog-content\") pod \"abc79732-a1d1-417e-9835-7f9ae9709d8c\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.200163 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-utilities\") pod \"abc79732-a1d1-417e-9835-7f9ae9709d8c\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.200282 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7tml\" (UniqueName: \"kubernetes.io/projected/abc79732-a1d1-417e-9835-7f9ae9709d8c-kube-api-access-v7tml\") pod \"abc79732-a1d1-417e-9835-7f9ae9709d8c\" (UID: \"abc79732-a1d1-417e-9835-7f9ae9709d8c\") " Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.205904 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-utilities" (OuterVolumeSpecName: "utilities") pod "abc79732-a1d1-417e-9835-7f9ae9709d8c" (UID: "abc79732-a1d1-417e-9835-7f9ae9709d8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.218936 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc79732-a1d1-417e-9835-7f9ae9709d8c-kube-api-access-v7tml" (OuterVolumeSpecName: "kube-api-access-v7tml") pod "abc79732-a1d1-417e-9835-7f9ae9709d8c" (UID: "abc79732-a1d1-417e-9835-7f9ae9709d8c"). InnerVolumeSpecName "kube-api-access-v7tml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.303309 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.303611 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7tml\" (UniqueName: \"kubernetes.io/projected/abc79732-a1d1-417e-9835-7f9ae9709d8c-kube-api-access-v7tml\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.326876 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abc79732-a1d1-417e-9835-7f9ae9709d8c" (UID: "abc79732-a1d1-417e-9835-7f9ae9709d8c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.406172 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abc79732-a1d1-417e-9835-7f9ae9709d8c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.546460 4992 generic.go:334] "Generic (PLEG): container finished" podID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerID="35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1" exitCode=0 Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.546538 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vxxw" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.546542 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerDied","Data":"35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1"} Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.548892 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vxxw" event={"ID":"abc79732-a1d1-417e-9835-7f9ae9709d8c","Type":"ContainerDied","Data":"b8e9744dcf8017357ddcce4466e2c18e3f529441bf0b315e5e0bb085e73beec1"} Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.548922 4992 scope.go:117] "RemoveContainer" containerID="35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.585409 4992 scope.go:117] "RemoveContainer" containerID="598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.592615 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vxxw"] Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.612810 4992 scope.go:117] "RemoveContainer" containerID="a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.623711 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vxxw"] Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.663222 4992 scope.go:117] "RemoveContainer" containerID="35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1" Dec 11 08:46:34 crc kubenswrapper[4992]: E1211 08:46:34.663614 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1\": container with ID starting with 35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1 not found: ID does not exist" containerID="35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.663662 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1"} err="failed to get container status \"35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1\": rpc error: code = NotFound desc = could not find container \"35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1\": container with ID starting with 35ecf1812d9fab900ea82bb52cc454f080a65683eb6e18e6fe775c6be46675d1 not found: ID does not exist" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.663687 4992 scope.go:117] "RemoveContainer" containerID="598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a" Dec 11 08:46:34 crc kubenswrapper[4992]: E1211 08:46:34.664245 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a\": container with ID starting with 598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a not found: ID does not exist" containerID="598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.664396 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a"} err="failed to get container status \"598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a\": rpc error: code = NotFound desc = could not find container \"598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a\": container with ID 
starting with 598b446f7813c9c1aa804b0519eb8fc57110d822946f43769e57cc803edfd74a not found: ID does not exist" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.664538 4992 scope.go:117] "RemoveContainer" containerID="a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325" Dec 11 08:46:34 crc kubenswrapper[4992]: E1211 08:46:34.665134 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325\": container with ID starting with a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325 not found: ID does not exist" containerID="a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.665161 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325"} err="failed to get container status \"a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325\": rpc error: code = NotFound desc = could not find container \"a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325\": container with ID starting with a0d249cfa7dacda220ed2495a99fcb4f88f083915bbff3e6eed61deb09d62325 not found: ID does not exist" Dec 11 08:46:34 crc kubenswrapper[4992]: I1211 08:46:34.790618 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.379017 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.379085 4992 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.379131 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.380015 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c9b7b9751f69dacb432c6111e285d3e2e47bd2a6e7fe288f0f982e3e58b7bafb"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.380077 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://c9b7b9751f69dacb432c6111e285d3e2e47bd2a6e7fe288f0f982e3e58b7bafb" gracePeriod=600 Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.605230 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="c9b7b9751f69dacb432c6111e285d3e2e47bd2a6e7fe288f0f982e3e58b7bafb" exitCode=0 Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.605432 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"c9b7b9751f69dacb432c6111e285d3e2e47bd2a6e7fe288f0f982e3e58b7bafb"} Dec 11 08:46:35 crc kubenswrapper[4992]: I1211 08:46:35.605692 4992 scope.go:117] "RemoveContainer" 
containerID="d64a7d32f88a68b108a9286da7fc154fed7c669f9f13fdf26c97611e89c34eb5" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.141936 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" path="/var/lib/kubelet/pods/abc79732-a1d1-417e-9835-7f9ae9709d8c/volumes" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.313114 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.453943 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-combined-ca-bundle\") pod \"0f771c72-ebb5-423a-a133-2a843afc0afe\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.454012 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lc9s\" (UniqueName: \"kubernetes.io/projected/0f771c72-ebb5-423a-a133-2a843afc0afe-kube-api-access-8lc9s\") pod \"0f771c72-ebb5-423a-a133-2a843afc0afe\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.454063 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f771c72-ebb5-423a-a133-2a843afc0afe-logs\") pod \"0f771c72-ebb5-423a-a133-2a843afc0afe\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.454133 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-config-data\") pod \"0f771c72-ebb5-423a-a133-2a843afc0afe\" (UID: \"0f771c72-ebb5-423a-a133-2a843afc0afe\") " Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.457109 4992 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f771c72-ebb5-423a-a133-2a843afc0afe-logs" (OuterVolumeSpecName: "logs") pod "0f771c72-ebb5-423a-a133-2a843afc0afe" (UID: "0f771c72-ebb5-423a-a133-2a843afc0afe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.461362 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f771c72-ebb5-423a-a133-2a843afc0afe-kube-api-access-8lc9s" (OuterVolumeSpecName: "kube-api-access-8lc9s") pod "0f771c72-ebb5-423a-a133-2a843afc0afe" (UID: "0f771c72-ebb5-423a-a133-2a843afc0afe"). InnerVolumeSpecName "kube-api-access-8lc9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.490142 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f771c72-ebb5-423a-a133-2a843afc0afe" (UID: "0f771c72-ebb5-423a-a133-2a843afc0afe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.520974 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-config-data" (OuterVolumeSpecName: "config-data") pod "0f771c72-ebb5-423a-a133-2a843afc0afe" (UID: "0f771c72-ebb5-423a-a133-2a843afc0afe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.556512 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.556887 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f771c72-ebb5-423a-a133-2a843afc0afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.556905 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lc9s\" (UniqueName: \"kubernetes.io/projected/0f771c72-ebb5-423a-a133-2a843afc0afe-kube-api-access-8lc9s\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.556934 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f771c72-ebb5-423a-a133-2a843afc0afe-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.620489 4992 generic.go:334] "Generic (PLEG): container finished" podID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerID="cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3" exitCode=0 Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.620544 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerDied","Data":"cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3"} Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.622997 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69"} Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.625862 4992 generic.go:334] "Generic (PLEG): container finished" podID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerID="82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e" exitCode=0 Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.625900 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f771c72-ebb5-423a-a133-2a843afc0afe","Type":"ContainerDied","Data":"82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e"} Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.625904 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.625922 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f771c72-ebb5-423a-a133-2a843afc0afe","Type":"ContainerDied","Data":"d3f974070919d2fc29485b047932a1dfe2aa49dca77560f22aeed9e7ea6305f0"} Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.625942 4992 scope.go:117] "RemoveContainer" containerID="82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.659134 4992 scope.go:117] "RemoveContainer" containerID="e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.671306 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.688574 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.699816 4992 scope.go:117] "RemoveContainer" containerID="82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e" 
Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.705273 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e\": container with ID starting with 82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e not found: ID does not exist" containerID="82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.705326 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e"} err="failed to get container status \"82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e\": rpc error: code = NotFound desc = could not find container \"82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e\": container with ID starting with 82058b54bfeeece185fbe7db73f374c9dc09e103d26177718cc0937e6791184e not found: ID does not exist" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.705360 4992 scope.go:117] "RemoveContainer" containerID="e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597" Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.709981 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597\": container with ID starting with e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597 not found: ID does not exist" containerID="e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.710034 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597"} err="failed to get container status 
\"e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597\": rpc error: code = NotFound desc = could not find container \"e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597\": container with ID starting with e95f52edc19ff044e6a4927b5f8ed450001c347b4804eee02743eaf287c38597 not found: ID does not exist" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.711239 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.711754 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-api" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.711774 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-api" Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.711809 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="registry-server" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.711816 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="registry-server" Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.711842 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="extract-utilities" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.711850 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="extract-utilities" Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.711873 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="extract-content" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.711881 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="extract-content" Dec 11 08:46:36 crc kubenswrapper[4992]: E1211 08:46:36.711894 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-log" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.711901 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-log" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.712069 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-api" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.712078 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" containerName="nova-api-log" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.712096 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="abc79732-a1d1-417e-9835-7f9ae9709d8c" containerName="registry-server" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.713866 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.733518 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.735461 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.735730 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.741805 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.761660 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.761729 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8sjk\" (UniqueName: \"kubernetes.io/projected/93f355cd-5fb9-48df-a991-9d5c7924709a-kube-api-access-s8sjk\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.761780 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f355cd-5fb9-48df-a991-9d5c7924709a-logs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.761832 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-public-tls-certs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.761903 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-config-data\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.761986 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.864928 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-public-tls-certs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.864998 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-config-data\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.865061 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0" Dec 11 
08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.865137 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.865178 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8sjk\" (UniqueName: \"kubernetes.io/projected/93f355cd-5fb9-48df-a991-9d5c7924709a-kube-api-access-s8sjk\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.865230 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f355cd-5fb9-48df-a991-9d5c7924709a-logs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.866589 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f355cd-5fb9-48df-a991-9d5c7924709a-logs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.871056 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.872800 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.872972 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-public-tls-certs\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.881440 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-config-data\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:36 crc kubenswrapper[4992]: I1211 08:46:36.885171 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8sjk\" (UniqueName: \"kubernetes.io/projected/93f355cd-5fb9-48df-a991-9d5c7924709a-kube-api-access-s8sjk\") pod \"nova-api-0\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") " pod="openstack/nova-api-0"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.061247 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.114667 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.144806 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.578145 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.635112 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93f355cd-5fb9-48df-a991-9d5c7924709a","Type":"ContainerStarted","Data":"466270a28bc061cbc908399ae0b94d701953383b4b7f4678adb5b4b9bc323c0e"}
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.654348 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.824061 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-549zl"]
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.825908 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.830506 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.830762 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.859569 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-549zl"]
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.902800 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-scripts\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.902875 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-config-data\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.902926 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:37 crc kubenswrapper[4992]: I1211 08:46:37.903312 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz979\" (UniqueName: \"kubernetes.io/projected/801312c0-357d-4900-aa2d-fabe849fb634-kube-api-access-kz979\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.005483 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz979\" (UniqueName: \"kubernetes.io/projected/801312c0-357d-4900-aa2d-fabe849fb634-kube-api-access-kz979\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.005587 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-scripts\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.005667 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-config-data\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.005728 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.009851 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-scripts\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.010416 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-config-data\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.012680 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.026002 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz979\" (UniqueName: \"kubernetes.io/projected/801312c0-357d-4900-aa2d-fabe849fb634-kube-api-access-kz979\") pod \"nova-cell1-cell-mapping-549zl\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") " pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.028755 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.110768 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f771c72-ebb5-423a-a133-2a843afc0afe" path="/var/lib/kubelet/pods/0f771c72-ebb5-423a-a133-2a843afc0afe/volumes"
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.469033 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-549zl"]
Dec 11 08:46:38 crc kubenswrapper[4992]: W1211 08:46:38.481626 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod801312c0_357d_4900_aa2d_fabe849fb634.slice/crio-0f8d49b0ee6dae6ecf2a8324e235012277fbbf2fa34a34775d6fa3e54acf7f3c WatchSource:0}: Error finding container 0f8d49b0ee6dae6ecf2a8324e235012277fbbf2fa34a34775d6fa3e54acf7f3c: Status 404 returned error can't find the container with id 0f8d49b0ee6dae6ecf2a8324e235012277fbbf2fa34a34775d6fa3e54acf7f3c
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.651443 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-549zl" event={"ID":"801312c0-357d-4900-aa2d-fabe849fb634","Type":"ContainerStarted","Data":"0f8d49b0ee6dae6ecf2a8324e235012277fbbf2fa34a34775d6fa3e54acf7f3c"}
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.656652 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93f355cd-5fb9-48df-a991-9d5c7924709a","Type":"ContainerStarted","Data":"92180b1fe174c1b32678a1a5b282f097afedc611e34798df628dec1c6af4c619"}
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.656693 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93f355cd-5fb9-48df-a991-9d5c7924709a","Type":"ContainerStarted","Data":"dbcd517132b6a422470be54215f7a2abb11139786ef457ca137fdd9fa2d34001"}
Dec 11 08:46:38 crc kubenswrapper[4992]: I1211 08:46:38.688707 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.688670067 podStartE2EDuration="2.688670067s" podCreationTimestamp="2025-12-11 08:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:38.67273381 +0000 UTC m=+1422.932207766" watchObservedRunningTime="2025-12-11 08:46:38.688670067 +0000 UTC m=+1422.948144003"
Dec 11 08:46:39 crc kubenswrapper[4992]: I1211 08:46:39.669907 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-549zl" event={"ID":"801312c0-357d-4900-aa2d-fabe849fb634","Type":"ContainerStarted","Data":"fa0279365301f7b1934bb5d759f88b31923e81cc44e07229edccbf0da87e7a39"}
Dec 11 08:46:39 crc kubenswrapper[4992]: I1211 08:46:39.683537 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-549zl" podStartSLOduration=2.683513754 podStartE2EDuration="2.683513754s" podCreationTimestamp="2025-12-11 08:46:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:39.682603661 +0000 UTC m=+1423.942077597" watchObservedRunningTime="2025-12-11 08:46:39.683513754 +0000 UTC m=+1423.942987680"
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.032785 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55"
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.136196 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-5bg7x"]
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.136504 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="dnsmasq-dns" containerID="cri-o://8edb39bda4d87d3a2b6872bd08e9fbe8708057efe489ec90401b1daba8f4cadf" gracePeriod=10
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.350340 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.188:5353: connect: connection refused"
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.682029 4992 generic.go:334] "Generic (PLEG): container finished" podID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerID="8edb39bda4d87d3a2b6872bd08e9fbe8708057efe489ec90401b1daba8f4cadf" exitCode=0
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.682088 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" event={"ID":"1762eaac-ace3-46ce-996f-619ab0c4bdae","Type":"ContainerDied","Data":"8edb39bda4d87d3a2b6872bd08e9fbe8708057efe489ec90401b1daba8f4cadf"}
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.682434 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x" event={"ID":"1762eaac-ace3-46ce-996f-619ab0c4bdae","Type":"ContainerDied","Data":"32e977ab30f09e9c374ad129b01169c42803e53398214ff620a2c1bbc5331388"}
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.682456 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e977ab30f09e9c374ad129b01169c42803e53398214ff620a2c1bbc5331388"
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.713874 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.771426 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-swift-storage-0\") pod \"1762eaac-ace3-46ce-996f-619ab0c4bdae\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") "
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.771497 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-config\") pod \"1762eaac-ace3-46ce-996f-619ab0c4bdae\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") "
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.771564 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-svc\") pod \"1762eaac-ace3-46ce-996f-619ab0c4bdae\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") "
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.771663 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-sb\") pod \"1762eaac-ace3-46ce-996f-619ab0c4bdae\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") "
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.771766 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7hd6\" (UniqueName: \"kubernetes.io/projected/1762eaac-ace3-46ce-996f-619ab0c4bdae-kube-api-access-c7hd6\") pod \"1762eaac-ace3-46ce-996f-619ab0c4bdae\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") "
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.771796 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-nb\") pod \"1762eaac-ace3-46ce-996f-619ab0c4bdae\" (UID: \"1762eaac-ace3-46ce-996f-619ab0c4bdae\") "
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.786914 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1762eaac-ace3-46ce-996f-619ab0c4bdae-kube-api-access-c7hd6" (OuterVolumeSpecName: "kube-api-access-c7hd6") pod "1762eaac-ace3-46ce-996f-619ab0c4bdae" (UID: "1762eaac-ace3-46ce-996f-619ab0c4bdae"). InnerVolumeSpecName "kube-api-access-c7hd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.849728 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1762eaac-ace3-46ce-996f-619ab0c4bdae" (UID: "1762eaac-ace3-46ce-996f-619ab0c4bdae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.853862 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1762eaac-ace3-46ce-996f-619ab0c4bdae" (UID: "1762eaac-ace3-46ce-996f-619ab0c4bdae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.857207 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-config" (OuterVolumeSpecName: "config") pod "1762eaac-ace3-46ce-996f-619ab0c4bdae" (UID: "1762eaac-ace3-46ce-996f-619ab0c4bdae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.864223 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1762eaac-ace3-46ce-996f-619ab0c4bdae" (UID: "1762eaac-ace3-46ce-996f-619ab0c4bdae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.874665 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.874696 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-config\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.874713 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.874724 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.874736 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7hd6\" (UniqueName: \"kubernetes.io/projected/1762eaac-ace3-46ce-996f-619ab0c4bdae-kube-api-access-c7hd6\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.874983 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1762eaac-ace3-46ce-996f-619ab0c4bdae" (UID: "1762eaac-ace3-46ce-996f-619ab0c4bdae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:46:40 crc kubenswrapper[4992]: I1211 08:46:40.977815 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1762eaac-ace3-46ce-996f-619ab0c4bdae-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:41 crc kubenswrapper[4992]: I1211 08:46:41.689191 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-5bg7x"
Dec 11 08:46:41 crc kubenswrapper[4992]: I1211 08:46:41.726372 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-5bg7x"]
Dec 11 08:46:41 crc kubenswrapper[4992]: I1211 08:46:41.733774 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-5bg7x"]
Dec 11 08:46:42 crc kubenswrapper[4992]: I1211 08:46:42.104643 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" path="/var/lib/kubelet/pods/1762eaac-ace3-46ce-996f-619ab0c4bdae/volumes"
Dec 11 08:46:43 crc kubenswrapper[4992]: I1211 08:46:43.707003 4992 generic.go:334] "Generic (PLEG): container finished" podID="801312c0-357d-4900-aa2d-fabe849fb634" containerID="fa0279365301f7b1934bb5d759f88b31923e81cc44e07229edccbf0da87e7a39" exitCode=0
Dec 11 08:46:43 crc kubenswrapper[4992]: I1211 08:46:43.707097 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-549zl" event={"ID":"801312c0-357d-4900-aa2d-fabe849fb634","Type":"ContainerDied","Data":"fa0279365301f7b1934bb5d759f88b31923e81cc44e07229edccbf0da87e7a39"}
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.085353 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.250278 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-combined-ca-bundle\") pod \"801312c0-357d-4900-aa2d-fabe849fb634\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") "
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.250696 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-scripts\") pod \"801312c0-357d-4900-aa2d-fabe849fb634\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") "
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.250789 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz979\" (UniqueName: \"kubernetes.io/projected/801312c0-357d-4900-aa2d-fabe849fb634-kube-api-access-kz979\") pod \"801312c0-357d-4900-aa2d-fabe849fb634\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") "
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.250851 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-config-data\") pod \"801312c0-357d-4900-aa2d-fabe849fb634\" (UID: \"801312c0-357d-4900-aa2d-fabe849fb634\") "
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.256091 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801312c0-357d-4900-aa2d-fabe849fb634-kube-api-access-kz979" (OuterVolumeSpecName: "kube-api-access-kz979") pod "801312c0-357d-4900-aa2d-fabe849fb634" (UID: "801312c0-357d-4900-aa2d-fabe849fb634"). InnerVolumeSpecName "kube-api-access-kz979". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.256463 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-scripts" (OuterVolumeSpecName: "scripts") pod "801312c0-357d-4900-aa2d-fabe849fb634" (UID: "801312c0-357d-4900-aa2d-fabe849fb634"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.288366 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "801312c0-357d-4900-aa2d-fabe849fb634" (UID: "801312c0-357d-4900-aa2d-fabe849fb634"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.288685 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-config-data" (OuterVolumeSpecName: "config-data") pod "801312c0-357d-4900-aa2d-fabe849fb634" (UID: "801312c0-357d-4900-aa2d-fabe849fb634"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.353018 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.353055 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.353068 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz979\" (UniqueName: \"kubernetes.io/projected/801312c0-357d-4900-aa2d-fabe849fb634-kube-api-access-kz979\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.353081 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/801312c0-357d-4900-aa2d-fabe849fb634-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.726719 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-549zl" event={"ID":"801312c0-357d-4900-aa2d-fabe849fb634","Type":"ContainerDied","Data":"0f8d49b0ee6dae6ecf2a8324e235012277fbbf2fa34a34775d6fa3e54acf7f3c"}
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.726761 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f8d49b0ee6dae6ecf2a8324e235012277fbbf2fa34a34775d6fa3e54acf7f3c"
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.726841 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-549zl"
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.903953 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.904262 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-log" containerID="cri-o://dbcd517132b6a422470be54215f7a2abb11139786ef457ca137fdd9fa2d34001" gracePeriod=30
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.904371 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-api" containerID="cri-o://92180b1fe174c1b32678a1a5b282f097afedc611e34798df628dec1c6af4c619" gracePeriod=30
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.935490 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.936098 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-log" containerID="cri-o://fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b" gracePeriod=30
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.936612 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-metadata" containerID="cri-o://a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7" gracePeriod=30
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.950014 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 08:46:45 crc kubenswrapper[4992]: I1211 08:46:45.950211 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="28c9189d-379d-4e21-9ae4-0b6df4951889" containerName="nova-scheduler-scheduler" containerID="cri-o://b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" gracePeriod=30
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.739786 4992 generic.go:334] "Generic (PLEG): container finished" podID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerID="fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b" exitCode=143
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.739898 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d5c47ec-8f6a-4af9-b762-104573dc7a27","Type":"ContainerDied","Data":"fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b"}
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.742210 4992 generic.go:334] "Generic (PLEG): container finished" podID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerID="92180b1fe174c1b32678a1a5b282f097afedc611e34798df628dec1c6af4c619" exitCode=0
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.742232 4992 generic.go:334] "Generic (PLEG): container finished" podID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerID="dbcd517132b6a422470be54215f7a2abb11139786ef457ca137fdd9fa2d34001" exitCode=143
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.742247 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93f355cd-5fb9-48df-a991-9d5c7924709a","Type":"ContainerDied","Data":"92180b1fe174c1b32678a1a5b282f097afedc611e34798df628dec1c6af4c619"}
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.742261 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93f355cd-5fb9-48df-a991-9d5c7924709a","Type":"ContainerDied","Data":"dbcd517132b6a422470be54215f7a2abb11139786ef457ca137fdd9fa2d34001"}
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.742269 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"93f355cd-5fb9-48df-a991-9d5c7924709a","Type":"ContainerDied","Data":"466270a28bc061cbc908399ae0b94d701953383b4b7f4678adb5b4b9bc323c0e"}
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.742279 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466270a28bc061cbc908399ae0b94d701953383b4b7f4678adb5b4b9bc323c0e"
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.825120 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.980530 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f355cd-5fb9-48df-a991-9d5c7924709a-logs\") pod \"93f355cd-5fb9-48df-a991-9d5c7924709a\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") "
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.980598 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-combined-ca-bundle\") pod \"93f355cd-5fb9-48df-a991-9d5c7924709a\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") "
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.980794 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-config-data\") pod \"93f355cd-5fb9-48df-a991-9d5c7924709a\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") "
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.980838 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8sjk\" (UniqueName: \"kubernetes.io/projected/93f355cd-5fb9-48df-a991-9d5c7924709a-kube-api-access-s8sjk\") pod \"93f355cd-5fb9-48df-a991-9d5c7924709a\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") "
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.980906 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-public-tls-certs\") pod \"93f355cd-5fb9-48df-a991-9d5c7924709a\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") "
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.981021 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-internal-tls-certs\") pod \"93f355cd-5fb9-48df-a991-9d5c7924709a\" (UID: \"93f355cd-5fb9-48df-a991-9d5c7924709a\") "
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.981450 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f355cd-5fb9-48df-a991-9d5c7924709a-logs" (OuterVolumeSpecName: "logs") pod "93f355cd-5fb9-48df-a991-9d5c7924709a" (UID: "93f355cd-5fb9-48df-a991-9d5c7924709a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:46:46 crc kubenswrapper[4992]: I1211 08:46:46.988369 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f355cd-5fb9-48df-a991-9d5c7924709a-kube-api-access-s8sjk" (OuterVolumeSpecName: "kube-api-access-s8sjk") pod "93f355cd-5fb9-48df-a991-9d5c7924709a" (UID: "93f355cd-5fb9-48df-a991-9d5c7924709a"). InnerVolumeSpecName "kube-api-access-s8sjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.010934 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-config-data" (OuterVolumeSpecName: "config-data") pod "93f355cd-5fb9-48df-a991-9d5c7924709a" (UID: "93f355cd-5fb9-48df-a991-9d5c7924709a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.014769 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93f355cd-5fb9-48df-a991-9d5c7924709a" (UID: "93f355cd-5fb9-48df-a991-9d5c7924709a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.037228 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "93f355cd-5fb9-48df-a991-9d5c7924709a" (UID: "93f355cd-5fb9-48df-a991-9d5c7924709a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.039661 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "93f355cd-5fb9-48df-a991-9d5c7924709a" (UID: "93f355cd-5fb9-48df-a991-9d5c7924709a"). InnerVolumeSpecName "internal-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.084261 4992 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.084311 4992 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.084326 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93f355cd-5fb9-48df-a991-9d5c7924709a-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.084340 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.084353 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93f355cd-5fb9-48df-a991-9d5c7924709a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.084366 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8sjk\" (UniqueName: \"kubernetes.io/projected/93f355cd-5fb9-48df-a991-9d5c7924709a-kube-api-access-s8sjk\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.605209 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.606813 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.608109 4992 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.608142 4992 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="28c9189d-379d-4e21-9ae4-0b6df4951889" containerName="nova-scheduler-scheduler" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.749701 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.793979 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.801750 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.825434 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.826066 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-log" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826092 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-log" Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.826109 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="dnsmasq-dns" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826118 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="dnsmasq-dns" Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.826142 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801312c0-357d-4900-aa2d-fabe849fb634" containerName="nova-manage" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826150 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="801312c0-357d-4900-aa2d-fabe849fb634" containerName="nova-manage" Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.826169 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="init" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826177 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="init" Dec 11 08:46:47 crc kubenswrapper[4992]: E1211 08:46:47.826206 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-api" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826214 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-api" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826523 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="1762eaac-ace3-46ce-996f-619ab0c4bdae" containerName="dnsmasq-dns" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826548 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="801312c0-357d-4900-aa2d-fabe849fb634" containerName="nova-manage" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826577 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-log" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.826593 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" containerName="nova-api-api" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.830932 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.833927 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.833939 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.834336 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 08:46:47 crc kubenswrapper[4992]: I1211 08:46:47.842105 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.066028 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-config-data\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.066347 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baafc0d4-8327-40d2-a00b-27c7388b64bf-logs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.066411 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.066455 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4x9\" (UniqueName: 
\"kubernetes.io/projected/baafc0d4-8327-40d2-a00b-27c7388b64bf-kube-api-access-kj4x9\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.066528 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.066549 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-public-tls-certs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.107895 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f355cd-5fb9-48df-a991-9d5c7924709a" path="/var/lib/kubelet/pods/93f355cd-5fb9-48df-a991-9d5c7924709a/volumes" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.168455 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.168537 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4x9\" (UniqueName: \"kubernetes.io/projected/baafc0d4-8327-40d2-a00b-27c7388b64bf-kube-api-access-kj4x9\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.168652 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.168684 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-public-tls-certs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.168753 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-config-data\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.168775 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baafc0d4-8327-40d2-a00b-27c7388b64bf-logs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.169170 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/baafc0d4-8327-40d2-a00b-27c7388b64bf-logs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.173138 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 
11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.173876 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-public-tls-certs\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.174032 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.176213 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baafc0d4-8327-40d2-a00b-27c7388b64bf-config-data\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.187238 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4x9\" (UniqueName: \"kubernetes.io/projected/baafc0d4-8327-40d2-a00b-27c7388b64bf-kube-api-access-kj4x9\") pod \"nova-api-0\" (UID: \"baafc0d4-8327-40d2-a00b-27c7388b64bf\") " pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.273681 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 08:46:48 crc kubenswrapper[4992]: I1211 08:46:48.771103 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 08:46:48 crc kubenswrapper[4992]: W1211 08:46:48.782301 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaafc0d4_8327_40d2_a00b_27c7388b64bf.slice/crio-3eddc6063a0c41b18c2370e8e4f61bd3493a329fb4676b649ca75e88a858d378 WatchSource:0}: Error finding container 3eddc6063a0c41b18c2370e8e4f61bd3493a329fb4676b649ca75e88a858d378: Status 404 returned error can't find the container with id 3eddc6063a0c41b18c2370e8e4f61bd3493a329fb4676b649ca75e88a858d378 Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.342033 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:52482->10.217.0.192:8775: read: connection reset by peer" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.342350 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": read tcp 10.217.0.2:52494->10.217.0.192:8775: read: connection reset by peer" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.705021 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.778714 4992 generic.go:334] "Generic (PLEG): container finished" podID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerID="a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7" exitCode=0 Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.778828 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.779573 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d5c47ec-8f6a-4af9-b762-104573dc7a27","Type":"ContainerDied","Data":"a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7"} Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.779677 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1d5c47ec-8f6a-4af9-b762-104573dc7a27","Type":"ContainerDied","Data":"cfe6420824a3474567bcec493247cd6fece9ec44600f318428041577e48e05b4"} Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.779738 4992 scope.go:117] "RemoveContainer" containerID="a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.783075 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"baafc0d4-8327-40d2-a00b-27c7388b64bf","Type":"ContainerStarted","Data":"d6886f623005b06903d2b1057daab399285d60f950ed35ed18dd26afc0718258"} Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.783118 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"baafc0d4-8327-40d2-a00b-27c7388b64bf","Type":"ContainerStarted","Data":"b3271c3d56afb71c4167a37258e5be1ebfdcbf83276d0631e2546cc360bd461b"} Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.783131 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"baafc0d4-8327-40d2-a00b-27c7388b64bf","Type":"ContainerStarted","Data":"3eddc6063a0c41b18c2370e8e4f61bd3493a329fb4676b649ca75e88a858d378"} Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.805117 4992 scope.go:117] "RemoveContainer" containerID="fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.807877 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.807860988 podStartE2EDuration="2.807860988s" podCreationTimestamp="2025-12-11 08:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:49.805437719 +0000 UTC m=+1434.064911645" watchObservedRunningTime="2025-12-11 08:46:49.807860988 +0000 UTC m=+1434.067334914" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.828454 4992 scope.go:117] "RemoveContainer" containerID="a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7" Dec 11 08:46:49 crc kubenswrapper[4992]: E1211 08:46:49.829834 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7\": container with ID starting with a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7 not found: ID does not exist" containerID="a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.829895 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7"} err="failed to get container status \"a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7\": rpc error: code = NotFound desc = could not find container 
\"a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7\": container with ID starting with a224ddd14cb36e489d660d35077769259aa26003a4367f7c6bff8a97990bafb7 not found: ID does not exist" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.829926 4992 scope.go:117] "RemoveContainer" containerID="fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b" Dec 11 08:46:49 crc kubenswrapper[4992]: E1211 08:46:49.830412 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b\": container with ID starting with fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b not found: ID does not exist" containerID="fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.830476 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b"} err="failed to get container status \"fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b\": rpc error: code = NotFound desc = could not find container \"fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b\": container with ID starting with fb784387f9747675db855ee5216ee8415bb0c905b498dd83071df9994acd687b not found: ID does not exist" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.898339 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-combined-ca-bundle\") pod \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.898486 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xkql\" (UniqueName: 
\"kubernetes.io/projected/1d5c47ec-8f6a-4af9-b762-104573dc7a27-kube-api-access-2xkql\") pod \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.898519 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-nova-metadata-tls-certs\") pod \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.898564 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5c47ec-8f6a-4af9-b762-104573dc7a27-logs\") pod \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.898615 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-config-data\") pod \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\" (UID: \"1d5c47ec-8f6a-4af9-b762-104573dc7a27\") " Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.900429 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d5c47ec-8f6a-4af9-b762-104573dc7a27-logs" (OuterVolumeSpecName: "logs") pod "1d5c47ec-8f6a-4af9-b762-104573dc7a27" (UID: "1d5c47ec-8f6a-4af9-b762-104573dc7a27"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.903739 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d5c47ec-8f6a-4af9-b762-104573dc7a27-kube-api-access-2xkql" (OuterVolumeSpecName: "kube-api-access-2xkql") pod "1d5c47ec-8f6a-4af9-b762-104573dc7a27" (UID: "1d5c47ec-8f6a-4af9-b762-104573dc7a27"). InnerVolumeSpecName "kube-api-access-2xkql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.929774 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d5c47ec-8f6a-4af9-b762-104573dc7a27" (UID: "1d5c47ec-8f6a-4af9-b762-104573dc7a27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.933902 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-config-data" (OuterVolumeSpecName: "config-data") pod "1d5c47ec-8f6a-4af9-b762-104573dc7a27" (UID: "1d5c47ec-8f6a-4af9-b762-104573dc7a27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:49 crc kubenswrapper[4992]: I1211 08:46:49.968009 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1d5c47ec-8f6a-4af9-b762-104573dc7a27" (UID: "1d5c47ec-8f6a-4af9-b762-104573dc7a27"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.001458 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.001492 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.001508 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xkql\" (UniqueName: \"kubernetes.io/projected/1d5c47ec-8f6a-4af9-b762-104573dc7a27-kube-api-access-2xkql\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.001520 4992 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d5c47ec-8f6a-4af9-b762-104573dc7a27-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.001533 4992 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d5c47ec-8f6a-4af9-b762-104573dc7a27-logs\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.127811 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.143743 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.161320 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:50 crc kubenswrapper[4992]: E1211 08:46:50.161981 4992 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-log" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.162021 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-log" Dec 11 08:46:50 crc kubenswrapper[4992]: E1211 08:46:50.162036 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-metadata" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.162042 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-metadata" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.162324 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-metadata" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.162571 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" containerName="nova-metadata-log" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.167360 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.172024 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.172290 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.178198 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.306717 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.306839 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-logs\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.306977 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-config-data\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.307041 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq7h4\" (UniqueName: \"kubernetes.io/projected/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-kube-api-access-fq7h4\") pod \"nova-metadata-0\" (UID: 
\"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.307092 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.408177 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.408403 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.408541 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-logs\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.408683 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-config-data\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.408785 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq7h4\" (UniqueName: \"kubernetes.io/projected/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-kube-api-access-fq7h4\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.409459 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-logs\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.412904 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.413979 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-config-data\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.414361 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.425401 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq7h4\" (UniqueName: \"kubernetes.io/projected/cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e-kube-api-access-fq7h4\") pod 
\"nova-metadata-0\" (UID: \"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e\") " pod="openstack/nova-metadata-0" Dec 11 08:46:50 crc kubenswrapper[4992]: I1211 08:46:50.594351 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.041083 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 08:46:51 crc kubenswrapper[4992]: W1211 08:46:51.051216 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc6d0cc7_fab7_4c05_b634_d8ad75d7e89e.slice/crio-654c01ea8c3c6005ec30fee9f74b7799f17a5e702c17134bfb7ab8358b5c0881 WatchSource:0}: Error finding container 654c01ea8c3c6005ec30fee9f74b7799f17a5e702c17134bfb7ab8358b5c0881: Status 404 returned error can't find the container with id 654c01ea8c3c6005ec30fee9f74b7799f17a5e702c17134bfb7ab8358b5c0881 Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.695744 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.806156 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e","Type":"ContainerStarted","Data":"c1d5e18242222f234901f09d2d8b3c45bda120c677b2bb97d25ceb1832453ea8"} Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.806467 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e","Type":"ContainerStarted","Data":"26a4afd2c775d5ceebf07b934e73fef918c926147809557c0dc9d736efcb72fc"} Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.806486 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e","Type":"ContainerStarted","Data":"654c01ea8c3c6005ec30fee9f74b7799f17a5e702c17134bfb7ab8358b5c0881"} Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.816067 4992 generic.go:334] "Generic (PLEG): container finished" podID="28c9189d-379d-4e21-9ae4-0b6df4951889" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" exitCode=0 Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.816117 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28c9189d-379d-4e21-9ae4-0b6df4951889","Type":"ContainerDied","Data":"b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e"} Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.816142 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"28c9189d-379d-4e21-9ae4-0b6df4951889","Type":"ContainerDied","Data":"8d5310ac099f49449b930e2f2a6255692ae8232372ef5bdfb11ee398c9e18f25"} Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.816158 4992 scope.go:117] "RemoveContainer" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" 
Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.816299 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.832176 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.832156501 podStartE2EDuration="1.832156501s" podCreationTimestamp="2025-12-11 08:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:51.822298082 +0000 UTC m=+1436.081772008" watchObservedRunningTime="2025-12-11 08:46:51.832156501 +0000 UTC m=+1436.091630427" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.842294 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-combined-ca-bundle\") pod \"28c9189d-379d-4e21-9ae4-0b6df4951889\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.842450 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-config-data\") pod \"28c9189d-379d-4e21-9ae4-0b6df4951889\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.842656 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zczg8\" (UniqueName: \"kubernetes.io/projected/28c9189d-379d-4e21-9ae4-0b6df4951889-kube-api-access-zczg8\") pod \"28c9189d-379d-4e21-9ae4-0b6df4951889\" (UID: \"28c9189d-379d-4e21-9ae4-0b6df4951889\") " Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.846861 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/28c9189d-379d-4e21-9ae4-0b6df4951889-kube-api-access-zczg8" (OuterVolumeSpecName: "kube-api-access-zczg8") pod "28c9189d-379d-4e21-9ae4-0b6df4951889" (UID: "28c9189d-379d-4e21-9ae4-0b6df4951889"). InnerVolumeSpecName "kube-api-access-zczg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.856242 4992 scope.go:117] "RemoveContainer" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" Dec 11 08:46:51 crc kubenswrapper[4992]: E1211 08:46:51.858298 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e\": container with ID starting with b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e not found: ID does not exist" containerID="b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.858367 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e"} err="failed to get container status \"b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e\": rpc error: code = NotFound desc = could not find container \"b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e\": container with ID starting with b5a9069467270f00bc44186312290d0dcd33f0116c19b229ebcb7ef1c9ab2e0e not found: ID does not exist" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.872381 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28c9189d-379d-4e21-9ae4-0b6df4951889" (UID: "28c9189d-379d-4e21-9ae4-0b6df4951889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.877338 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-config-data" (OuterVolumeSpecName: "config-data") pod "28c9189d-379d-4e21-9ae4-0b6df4951889" (UID: "28c9189d-379d-4e21-9ae4-0b6df4951889"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.946145 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zczg8\" (UniqueName: \"kubernetes.io/projected/28c9189d-379d-4e21-9ae4-0b6df4951889-kube-api-access-zczg8\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.946322 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:51 crc kubenswrapper[4992]: I1211 08:46:51.946348 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c9189d-379d-4e21-9ae4-0b6df4951889-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.105389 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d5c47ec-8f6a-4af9-b762-104573dc7a27" path="/var/lib/kubelet/pods/1d5c47ec-8f6a-4af9-b762-104573dc7a27/volumes" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.149108 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.159247 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.169320 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] 
Dec 11 08:46:52 crc kubenswrapper[4992]: E1211 08:46:52.169739 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c9189d-379d-4e21-9ae4-0b6df4951889" containerName="nova-scheduler-scheduler" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.169759 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c9189d-379d-4e21-9ae4-0b6df4951889" containerName="nova-scheduler-scheduler" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.169969 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c9189d-379d-4e21-9ae4-0b6df4951889" containerName="nova-scheduler-scheduler" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.170678 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.178892 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.179858 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.357206 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd2254a-360d-4e10-8185-12ef58a09c9b-config-data\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.357288 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd2254a-360d-4e10-8185-12ef58a09c9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.357576 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbwb6\" (UniqueName: \"kubernetes.io/projected/5cd2254a-360d-4e10-8185-12ef58a09c9b-kube-api-access-wbwb6\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.459876 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cd2254a-360d-4e10-8185-12ef58a09c9b-config-data\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.460014 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd2254a-360d-4e10-8185-12ef58a09c9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.460152 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbwb6\" (UniqueName: \"kubernetes.io/projected/5cd2254a-360d-4e10-8185-12ef58a09c9b-kube-api-access-wbwb6\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.465993 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cd2254a-360d-4e10-8185-12ef58a09c9b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.466319 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5cd2254a-360d-4e10-8185-12ef58a09c9b-config-data\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.483285 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbwb6\" (UniqueName: \"kubernetes.io/projected/5cd2254a-360d-4e10-8185-12ef58a09c9b-kube-api-access-wbwb6\") pod \"nova-scheduler-0\" (UID: \"5cd2254a-360d-4e10-8185-12ef58a09c9b\") " pod="openstack/nova-scheduler-0" Dec 11 08:46:52 crc kubenswrapper[4992]: I1211 08:46:52.542764 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 08:46:53 crc kubenswrapper[4992]: I1211 08:46:53.018283 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 08:46:53 crc kubenswrapper[4992]: W1211 08:46:53.021907 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cd2254a_360d_4e10_8185_12ef58a09c9b.slice/crio-bf243a06fda34565897c5d7733157e17bf6da33c19631ea3552255377ee024d7 WatchSource:0}: Error finding container bf243a06fda34565897c5d7733157e17bf6da33c19631ea3552255377ee024d7: Status 404 returned error can't find the container with id bf243a06fda34565897c5d7733157e17bf6da33c19631ea3552255377ee024d7 Dec 11 08:46:53 crc kubenswrapper[4992]: I1211 08:46:53.836663 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5cd2254a-360d-4e10-8185-12ef58a09c9b","Type":"ContainerStarted","Data":"90760e0ee1f3b0281706ce9808fd31240e94666c2f1db5107ed3127f16b10823"} Dec 11 08:46:53 crc kubenswrapper[4992]: I1211 08:46:53.837228 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"5cd2254a-360d-4e10-8185-12ef58a09c9b","Type":"ContainerStarted","Data":"bf243a06fda34565897c5d7733157e17bf6da33c19631ea3552255377ee024d7"} Dec 11 08:46:53 crc kubenswrapper[4992]: I1211 08:46:53.860825 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.860806489 podStartE2EDuration="1.860806489s" podCreationTimestamp="2025-12-11 08:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:46:53.858848402 +0000 UTC m=+1438.118322378" watchObservedRunningTime="2025-12-11 08:46:53.860806489 +0000 UTC m=+1438.120280435" Dec 11 08:46:54 crc kubenswrapper[4992]: I1211 08:46:54.108586 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c9189d-379d-4e21-9ae4-0b6df4951889" path="/var/lib/kubelet/pods/28c9189d-379d-4e21-9ae4-0b6df4951889/volumes" Dec 11 08:46:55 crc kubenswrapper[4992]: I1211 08:46:55.595330 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 08:46:55 crc kubenswrapper[4992]: I1211 08:46:55.595906 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 08:46:57 crc kubenswrapper[4992]: I1211 08:46:57.543665 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 08:46:58 crc kubenswrapper[4992]: I1211 08:46:58.190451 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 11 08:46:58 crc kubenswrapper[4992]: I1211 08:46:58.273900 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 08:46:58 crc kubenswrapper[4992]: I1211 08:46:58.273960 4992 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 08:46:59 crc kubenswrapper[4992]: I1211 08:46:59.280936 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="baafc0d4-8327-40d2-a00b-27c7388b64bf" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 08:46:59 crc kubenswrapper[4992]: I1211 08:46:59.280953 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="baafc0d4-8327-40d2-a00b-27c7388b64bf" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 08:47:00 crc kubenswrapper[4992]: I1211 08:47:00.594992 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 08:47:00 crc kubenswrapper[4992]: I1211 08:47:00.595063 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 08:47:01 crc kubenswrapper[4992]: I1211 08:47:01.611783 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 08:47:01 crc kubenswrapper[4992]: I1211 08:47:01.611808 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.543484 4992 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.593761 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.933993 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.934319 4992 generic.go:334] "Generic (PLEG): container finished" podID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerID="cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1" exitCode=137 Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.934394 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerDied","Data":"cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1"} Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.934447 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"29ba6ac6-907c-45c1-98b6-ee952adb74b1","Type":"ContainerDied","Data":"66b4f46c8866ef4fe517d95221436a2947d23cb41dd3c2b9a92fd9519851d113"} Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.934467 4992 scope.go:117] "RemoveContainer" containerID="cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.966084 4992 scope.go:117] "RemoveContainer" containerID="319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.982799 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 08:47:02 crc kubenswrapper[4992]: I1211 08:47:02.987209 4992 scope.go:117] "RemoveContainer" containerID="f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da" Dec 11 08:47:03 crc 
kubenswrapper[4992]: I1211 08:47:03.009121 4992 scope.go:117] "RemoveContainer" containerID="cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.029949 4992 scope.go:117] "RemoveContainer" containerID="cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1" Dec 11 08:47:03 crc kubenswrapper[4992]: E1211 08:47:03.030431 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1\": container with ID starting with cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1 not found: ID does not exist" containerID="cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.030463 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1"} err="failed to get container status \"cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1\": rpc error: code = NotFound desc = could not find container \"cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1\": container with ID starting with cd8387381cc6af4a69ea90e651042d6667bb14cf8e426ea551f69f1d8a4c01f1 not found: ID does not exist" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.030483 4992 scope.go:117] "RemoveContainer" containerID="319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646" Dec 11 08:47:03 crc kubenswrapper[4992]: E1211 08:47:03.030792 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646\": container with ID starting with 319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646 not found: ID does not exist" 
containerID="319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.030811 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646"} err="failed to get container status \"319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646\": rpc error: code = NotFound desc = could not find container \"319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646\": container with ID starting with 319e13541d3744e2ba60dbd7b1ec5ace867bfd8eed04887f68a97703956a2646 not found: ID does not exist" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.030822 4992 scope.go:117] "RemoveContainer" containerID="f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da" Dec 11 08:47:03 crc kubenswrapper[4992]: E1211 08:47:03.031112 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da\": container with ID starting with f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da not found: ID does not exist" containerID="f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.031151 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da"} err="failed to get container status \"f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da\": rpc error: code = NotFound desc = could not find container \"f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da\": container with ID starting with f7099f06e4b812561e850896bfbe80bce478de456905d82c88b28641e12c65da not found: ID does not exist" Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.031175 4992 scope.go:117] 
"RemoveContainer" containerID="cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3"
Dec 11 08:47:03 crc kubenswrapper[4992]: E1211 08:47:03.031414 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3\": container with ID starting with cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3 not found: ID does not exist" containerID="cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3"
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.031437 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3"} err="failed to get container status \"cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3\": rpc error: code = NotFound desc = could not find container \"cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3\": container with ID starting with cd863baf3c46084db2dacb6ec0d2d59f47790fc2e46d1f2c7e95a7ce1071f4b3 not found: ID does not exist"
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.102917 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-ceilometer-tls-certs\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103223 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-sg-core-conf-yaml\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103274 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-run-httpd\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103323 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-combined-ca-bundle\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103344 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74w44\" (UniqueName: \"kubernetes.io/projected/29ba6ac6-907c-45c1-98b6-ee952adb74b1-kube-api-access-74w44\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103413 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-config-data\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103437 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-log-httpd\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.103829 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-scripts\") pod \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\" (UID: \"29ba6ac6-907c-45c1-98b6-ee952adb74b1\") "
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.104232 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.104258 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.108431 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ba6ac6-907c-45c1-98b6-ee952adb74b1-kube-api-access-74w44" (OuterVolumeSpecName: "kube-api-access-74w44") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "kube-api-access-74w44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.109831 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-scripts" (OuterVolumeSpecName: "scripts") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.138298 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.171069 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.205590 4992 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.205622 4992 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.206186 4992 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.206208 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74w44\" (UniqueName: \"kubernetes.io/projected/29ba6ac6-907c-45c1-98b6-ee952adb74b1-kube-api-access-74w44\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.206225 4992 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/29ba6ac6-907c-45c1-98b6-ee952adb74b1-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.206236 4992 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.209385 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.212527 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-config-data" (OuterVolumeSpecName: "config-data") pod "29ba6ac6-907c-45c1-98b6-ee952adb74b1" (UID: "29ba6ac6-907c-45c1-98b6-ee952adb74b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.307837 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.307873 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29ba6ac6-907c-45c1-98b6-ee952adb74b1-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.944266 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:47:03 crc kubenswrapper[4992]: I1211 08:47:03.994864 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.020726 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027028 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:47:04 crc kubenswrapper[4992]: E1211 08:47:04.027479 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-notification-agent"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027504 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-notification-agent"
Dec 11 08:47:04 crc kubenswrapper[4992]: E1211 08:47:04.027522 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="sg-core"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027529 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="sg-core"
Dec 11 08:47:04 crc kubenswrapper[4992]: E1211 08:47:04.027553 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="proxy-httpd"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027561 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="proxy-httpd"
Dec 11 08:47:04 crc kubenswrapper[4992]: E1211 08:47:04.027595 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-central-agent"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027603 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-central-agent"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027829 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="proxy-httpd"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027855 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-notification-agent"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027872 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="sg-core"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.027889 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" containerName="ceilometer-central-agent"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.030105 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.032694 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.033048 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.033310 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.043885 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.118824 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ba6ac6-907c-45c1-98b6-ee952adb74b1" path="/var/lib/kubelet/pods/29ba6ac6-907c-45c1-98b6-ee952adb74b1/volumes"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.124406 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.124461 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-scripts\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.124486 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.124541 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c569a72-7d96-4212-b681-f0d5a0c19c61-run-httpd\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.125216 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c569a72-7d96-4212-b681-f0d5a0c19c61-log-httpd\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.125249 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94w4\" (UniqueName: \"kubernetes.io/projected/2c569a72-7d96-4212-b681-f0d5a0c19c61-kube-api-access-k94w4\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.125552 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.125780 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-config-data\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229192 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94w4\" (UniqueName: \"kubernetes.io/projected/2c569a72-7d96-4212-b681-f0d5a0c19c61-kube-api-access-k94w4\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229355 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229500 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-config-data\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229558 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229593 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-scripts\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229617 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229680 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c569a72-7d96-4212-b681-f0d5a0c19c61-run-httpd\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.229724 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c569a72-7d96-4212-b681-f0d5a0c19c61-log-httpd\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.230206 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c569a72-7d96-4212-b681-f0d5a0c19c61-log-httpd\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.232488 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c569a72-7d96-4212-b681-f0d5a0c19c61-run-httpd\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.236754 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.247987 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.248679 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-scripts\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.249097 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-config-data\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.262921 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c569a72-7d96-4212-b681-f0d5a0c19c61-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.265573 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94w4\" (UniqueName: \"kubernetes.io/projected/2c569a72-7d96-4212-b681-f0d5a0c19c61-kube-api-access-k94w4\") pod \"ceilometer-0\" (UID: \"2c569a72-7d96-4212-b681-f0d5a0c19c61\") " pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.347590 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.813148 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 08:47:04 crc kubenswrapper[4992]: I1211 08:47:04.954931 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c569a72-7d96-4212-b681-f0d5a0c19c61","Type":"ContainerStarted","Data":"56bb2438797f931411c0b350023705b309349b6f5587cc3d23cebb54163d1d7f"}
Dec 11 08:47:05 crc kubenswrapper[4992]: I1211 08:47:05.966353 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c569a72-7d96-4212-b681-f0d5a0c19c61","Type":"ContainerStarted","Data":"178de973dd7ace5576a289507a6b60369b07d6f6dd8cb44c49c7c539aaebce24"}
Dec 11 08:47:06 crc kubenswrapper[4992]: I1211 08:47:06.975872 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c569a72-7d96-4212-b681-f0d5a0c19c61","Type":"ContainerStarted","Data":"d7d256128e5f550cdd0c9dfec3463ff50f6e3eb835c7ee2867d8e35819d83c6d"}
Dec 11 08:47:07 crc kubenswrapper[4992]: I1211 08:47:07.984968 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c569a72-7d96-4212-b681-f0d5a0c19c61","Type":"ContainerStarted","Data":"011e5e78befb70e0e8cfa890fb26a8a4de2fe3fa84dbd44eb06869e8f767dcbf"}
Dec 11 08:47:08 crc kubenswrapper[4992]: I1211 08:47:08.284298 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 11 08:47:08 crc kubenswrapper[4992]: I1211 08:47:08.285231 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 11 08:47:08 crc kubenswrapper[4992]: I1211 08:47:08.286717 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 11 08:47:08 crc kubenswrapper[4992]: I1211 08:47:08.292931 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 11 08:47:08 crc kubenswrapper[4992]: I1211 08:47:08.999616 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c569a72-7d96-4212-b681-f0d5a0c19c61","Type":"ContainerStarted","Data":"6f8b1f5d3b2f41d9655595d91bbf34faa9140d6b4b4b6effdc940c2ab0a5d0a9"}
Dec 11 08:47:08 crc kubenswrapper[4992]: I1211 08:47:08.999957 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 11 08:47:09 crc kubenswrapper[4992]: I1211 08:47:08.999983 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 11 08:47:09 crc kubenswrapper[4992]: I1211 08:47:09.005604 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 11 08:47:09 crc kubenswrapper[4992]: I1211 08:47:09.024822 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.4947144479999999 podStartE2EDuration="5.024803753s" podCreationTimestamp="2025-12-11 08:47:04 +0000 UTC" firstStartedPulling="2025-12-11 08:47:04.812373049 +0000 UTC m=+1449.071846965" lastFinishedPulling="2025-12-11 08:47:08.342462344 +0000 UTC m=+1452.601936270" observedRunningTime="2025-12-11 08:47:09.016880771 +0000 UTC m=+1453.276354697" watchObservedRunningTime="2025-12-11 08:47:09.024803753 +0000 UTC m=+1453.284277679"
Dec 11 08:47:10 crc kubenswrapper[4992]: I1211 08:47:10.598503 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 11 08:47:10 crc kubenswrapper[4992]: I1211 08:47:10.601277 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 11 08:47:10 crc kubenswrapper[4992]: I1211 08:47:10.603157 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 11 08:47:11 crc kubenswrapper[4992]: I1211 08:47:11.039764 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 11 08:47:34 crc kubenswrapper[4992]: I1211 08:47:34.355193 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 11 08:47:43 crc kubenswrapper[4992]: I1211 08:47:43.943759 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 11 08:47:44 crc kubenswrapper[4992]: I1211 08:47:44.732081 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 11 08:47:48 crc kubenswrapper[4992]: I1211 08:47:48.118395 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="72f9411b-61f4-4615-8653-5f90b629690d" containerName="rabbitmq" containerID="cri-o://25983fdca35d0058db378bc88fb37a396b0fb7cf9b358a0d61704c6b9388d9fc" gracePeriod=604796
Dec 11 08:47:48 crc kubenswrapper[4992]: I1211 08:47:48.873395 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="rabbitmq" containerID="cri-o://aa5295d6e2f7f8f85b1f37929a049b5cb2e5c256a729ae48c02399dada605cb2" gracePeriod=604796
Dec 11 08:47:54 crc kubenswrapper[4992]: I1211 08:47:54.418863 4992 generic.go:334] "Generic (PLEG): container finished" podID="72f9411b-61f4-4615-8653-5f90b629690d" containerID="25983fdca35d0058db378bc88fb37a396b0fb7cf9b358a0d61704c6b9388d9fc" exitCode=0
Dec 11 08:47:54 crc kubenswrapper[4992]: I1211 08:47:54.418926 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72f9411b-61f4-4615-8653-5f90b629690d","Type":"ContainerDied","Data":"25983fdca35d0058db378bc88fb37a396b0fb7cf9b358a0d61704c6b9388d9fc"}
Dec 11 08:47:54 crc kubenswrapper[4992]: I1211 08:47:54.871282 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037162 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037435 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-erlang-cookie\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037469 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-tls\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037487 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-config-data\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037524 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037572 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-server-conf\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037611 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmbzl\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-kube-api-access-wmbzl\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037704 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-plugins-conf\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037724 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-plugins\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037783 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72f9411b-61f4-4615-8653-5f90b629690d-pod-info\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.037834 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72f9411b-61f4-4615-8653-5f90b629690d-erlang-cookie-secret\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.042269 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.059686 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.060314 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.068959 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72f9411b-61f4-4615-8653-5f90b629690d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.079944 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.081765 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.081905 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-kube-api-access-wmbzl" (OuterVolumeSpecName: "kube-api-access-wmbzl") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "kube-api-access-wmbzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.104805 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/72f9411b-61f4-4615-8653-5f90b629690d-pod-info" (OuterVolumeSpecName: "pod-info") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.136934 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-config-data" (OuterVolumeSpecName: "config-data") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148875 4992 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72f9411b-61f4-4615-8653-5f90b629690d-pod-info\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148912 4992 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72f9411b-61f4-4615-8653-5f90b629690d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148924 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148933 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148941 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148963 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148972 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmbzl\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-kube-api-access-wmbzl\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148980 4992 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.148989 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.210101 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-server-conf" (OuterVolumeSpecName: "server-conf") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.220035 4992 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.249814 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.250392 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd\") pod \"72f9411b-61f4-4615-8653-5f90b629690d\" (UID: \"72f9411b-61f4-4615-8653-5f90b629690d\") "
Dec 11 08:47:55 crc kubenswrapper[4992]: W1211 08:47:55.250499 4992 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/72f9411b-61f4-4615-8653-5f90b629690d/volumes/kubernetes.io~projected/rabbitmq-confd
Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.250553 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "72f9411b-61f4-4615-8653-5f90b629690d" (UID: "72f9411b-61f4-4615-8653-5f90b629690d"). InnerVolumeSpecName "rabbitmq-confd".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.250802 4992 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72f9411b-61f4-4615-8653-5f90b629690d-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.250819 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72f9411b-61f4-4615-8653-5f90b629690d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.251017 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.328727 4992 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.430942 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72f9411b-61f4-4615-8653-5f90b629690d","Type":"ContainerDied","Data":"e06ef21aa6cecd1bbd3da957f5db0737eaf9090187ec504a1c9ed94acdac9096"} Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.430994 4992 scope.go:117] "RemoveContainer" containerID="25983fdca35d0058db378bc88fb37a396b0fb7cf9b358a0d61704c6b9388d9fc" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.431107 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.447154 4992 generic.go:334] "Generic (PLEG): container finished" podID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerID="aa5295d6e2f7f8f85b1f37929a049b5cb2e5c256a729ae48c02399dada605cb2" exitCode=0 Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.447199 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a0fa5ac-9268-4db9-8e40-42aca5111af9","Type":"ContainerDied","Data":"aa5295d6e2f7f8f85b1f37929a049b5cb2e5c256a729ae48c02399dada605cb2"} Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.467620 4992 scope.go:117] "RemoveContainer" containerID="daa8d142bc905225839350387f54d5d85e7e63e3eae6f27da2901a157fc2ea72" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.470716 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.482091 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.502361 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:47:55 crc kubenswrapper[4992]: E1211 08:47:55.503409 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f9411b-61f4-4615-8653-5f90b629690d" containerName="setup-container" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.503431 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="72f9411b-61f4-4615-8653-5f90b629690d" containerName="setup-container" Dec 11 08:47:55 crc kubenswrapper[4992]: E1211 08:47:55.503459 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72f9411b-61f4-4615-8653-5f90b629690d" containerName="rabbitmq" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.503466 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72f9411b-61f4-4615-8653-5f90b629690d" containerName="rabbitmq" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.503671 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="72f9411b-61f4-4615-8653-5f90b629690d" containerName="rabbitmq" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.504607 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.507282 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.507337 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.507484 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.507700 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.507737 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.507921 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.510999 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-twq5q" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.549924 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657281 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657526 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/614cd874-917b-4851-b702-cfb170fcec4d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657553 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657592 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657616 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/614cd874-917b-4851-b702-cfb170fcec4d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657648 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-config-data\") 
pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657673 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657715 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657737 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657757 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhd8\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-kube-api-access-glhd8\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.657782 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: 
\"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759322 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759383 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/614cd874-917b-4851-b702-cfb170fcec4d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759420 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759471 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759499 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/614cd874-917b-4851-b702-cfb170fcec4d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759524 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-config-data\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759558 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759613 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759686 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759725 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhd8\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-kube-api-access-glhd8\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.759823 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") 
pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.760396 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.760436 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.760801 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.761060 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-config-data\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.761422 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/614cd874-917b-4851-b702-cfb170fcec4d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.765558 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.765721 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/614cd874-917b-4851-b702-cfb170fcec4d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.765848 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.766214 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/614cd874-917b-4851-b702-cfb170fcec4d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.767303 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.779089 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhd8\" (UniqueName: 
\"kubernetes.io/projected/614cd874-917b-4851-b702-cfb170fcec4d-kube-api-access-glhd8\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.805767 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"614cd874-917b-4851-b702-cfb170fcec4d\") " pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.845858 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.855413 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978304 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6hf\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-kube-api-access-wt6hf\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978612 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a0fa5ac-9268-4db9-8e40-42aca5111af9-erlang-cookie-secret\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978710 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-erlang-cookie\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 
08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978733 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-plugins\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978770 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-confd\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978792 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-tls\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978810 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-plugins-conf\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978832 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a0fa5ac-9268-4db9-8e40-42aca5111af9-pod-info\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978855 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-server-conf\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978898 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.978920 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-config-data\") pod \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\" (UID: \"7a0fa5ac-9268-4db9-8e40-42aca5111af9\") " Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.980956 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.983338 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a0fa5ac-9268-4db9-8e40-42aca5111af9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.983924 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.984168 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-kube-api-access-wt6hf" (OuterVolumeSpecName: "kube-api-access-wt6hf") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "kube-api-access-wt6hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.990011 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7a0fa5ac-9268-4db9-8e40-42aca5111af9-pod-info" (OuterVolumeSpecName: "pod-info") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.991939 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:47:55 crc kubenswrapper[4992]: I1211 08:47:55.994276 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.005276 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.017461 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-config-data" (OuterVolumeSpecName: "config-data") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.052824 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-server-conf" (OuterVolumeSpecName: "server-conf") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082698 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082738 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082752 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082762 4992 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082770 4992 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7a0fa5ac-9268-4db9-8e40-42aca5111af9-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082780 4992 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082809 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082819 4992 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7a0fa5ac-9268-4db9-8e40-42aca5111af9-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082828 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6hf\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-kube-api-access-wt6hf\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.082837 4992 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7a0fa5ac-9268-4db9-8e40-42aca5111af9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.109384 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7a0fa5ac-9268-4db9-8e40-42aca5111af9" (UID: "7a0fa5ac-9268-4db9-8e40-42aca5111af9"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.117805 4992 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.117825 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72f9411b-61f4-4615-8653-5f90b629690d" path="/var/lib/kubelet/pods/72f9411b-61f4-4615-8653-5f90b629690d/volumes" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.184903 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.184928 4992 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7a0fa5ac-9268-4db9-8e40-42aca5111af9-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.411043 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.458335 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7a0fa5ac-9268-4db9-8e40-42aca5111af9","Type":"ContainerDied","Data":"3a924922ce8a768d65894f79c30b24e8a3825e36ede065cb63188abf2cbc1543"} Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.458388 4992 scope.go:117] "RemoveContainer" containerID="aa5295d6e2f7f8f85b1f37929a049b5cb2e5c256a729ae48c02399dada605cb2" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.458519 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.462979 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"614cd874-917b-4851-b702-cfb170fcec4d","Type":"ContainerStarted","Data":"e7605b22e21c9fa3682bbb7c7e998f0995cd94cc1f4024044ca67917195f0efc"} Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.568507 4992 scope.go:117] "RemoveContainer" containerID="259359c8e6c2d1faf101b6f5d0f1887b1de6ea4e766412b2cf6cbd8f9fc64fb4" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.585006 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.600100 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.626797 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:47:56 crc kubenswrapper[4992]: E1211 08:47:56.627266 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="rabbitmq" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.627286 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="rabbitmq" Dec 11 08:47:56 crc kubenswrapper[4992]: E1211 08:47:56.627335 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="setup-container" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.627344 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="setup-container" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.627546 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" containerName="rabbitmq" 
Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.628760 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.630773 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.631222 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-kg67r" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.631405 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.631547 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.631700 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.631965 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.632124 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.651383 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.795440 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 
08:47:56.795805 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.795834 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.795851 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b10485db-da0e-493a-ad33-82634346be84-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.795868 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.796058 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.796307 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b10485db-da0e-493a-ad33-82634346be84-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.796368 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.796394 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cspdp\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-kube-api-access-cspdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.796426 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.796574 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 
08:47:56.898406 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.898455 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.898721 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.898773 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.898791 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b10485db-da0e-493a-ad33-82634346be84-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.898804 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899042 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899468 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899672 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899713 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b10485db-da0e-493a-ad33-82634346be84-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899751 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cspdp\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-kube-api-access-cspdp\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899772 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.899806 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.900279 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.900396 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.900493 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.901377 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b10485db-da0e-493a-ad33-82634346be84-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.902628 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.902770 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.902853 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b10485db-da0e-493a-ad33-82634346be84-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.903311 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b10485db-da0e-493a-ad33-82634346be84-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.922005 4992 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cspdp\" (UniqueName: \"kubernetes.io/projected/b10485db-da0e-493a-ad33-82634346be84-kube-api-access-cspdp\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.936868 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b10485db-da0e-493a-ad33-82634346be84\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:56 crc kubenswrapper[4992]: I1211 08:47:56.958852 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:47:57 crc kubenswrapper[4992]: I1211 08:47:57.410513 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.109387 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a0fa5ac-9268-4db9-8e40-42aca5111af9" path="/var/lib/kubelet/pods/7a0fa5ac-9268-4db9-8e40-42aca5111af9/volumes" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.484251 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"614cd874-917b-4851-b702-cfb170fcec4d","Type":"ContainerStarted","Data":"42cd88be08fd82a4678bc0ac63c3163f55582a4a54f8a506d4d149ba43a1808d"} Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.485625 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b10485db-da0e-493a-ad33-82634346be84","Type":"ContainerStarted","Data":"48e101ba7245ca5d8210e66d69736ebf24af5f1efaa81bc8c0cb7efb4b964beb"} Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.555916 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-pl75f"] Dec 11 08:47:58 crc 
kubenswrapper[4992]: I1211 08:47:58.557453 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.559714 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.586951 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-pl75f"] Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733280 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733375 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733598 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgp44\" (UniqueName: \"kubernetes.io/projected/7f756693-1fa0-41b1-8abf-4eb663ed232b-kube-api-access-pgp44\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733697 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-svc\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733798 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733846 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.733872 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-config\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835256 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835385 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgp44\" (UniqueName: 
\"kubernetes.io/projected/7f756693-1fa0-41b1-8abf-4eb663ed232b-kube-api-access-pgp44\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835422 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-svc\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835463 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835493 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835517 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-config\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.835594 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.836737 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-svc\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.836840 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.836924 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.837037 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.837260 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-nb\") 
pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.837480 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-config\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.856416 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgp44\" (UniqueName: \"kubernetes.io/projected/7f756693-1fa0-41b1-8abf-4eb663ed232b-kube-api-access-pgp44\") pod \"dnsmasq-dns-67b789f86c-pl75f\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:58 crc kubenswrapper[4992]: I1211 08:47:58.885461 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:47:59 crc kubenswrapper[4992]: I1211 08:47:59.352088 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-pl75f"] Dec 11 08:47:59 crc kubenswrapper[4992]: W1211 08:47:59.352411 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f756693_1fa0_41b1_8abf_4eb663ed232b.slice/crio-9622b70b7ea2983f440e4112a51ff931ad01fea9109895456ba4dd91a0b077a8 WatchSource:0}: Error finding container 9622b70b7ea2983f440e4112a51ff931ad01fea9109895456ba4dd91a0b077a8: Status 404 returned error can't find the container with id 9622b70b7ea2983f440e4112a51ff931ad01fea9109895456ba4dd91a0b077a8 Dec 11 08:47:59 crc kubenswrapper[4992]: I1211 08:47:59.501244 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b10485db-da0e-493a-ad33-82634346be84","Type":"ContainerStarted","Data":"b1ca116a1db6aca9cab82c9a52be54e892fa5fcbaaff000831dfa0cc7076c3f8"} Dec 11 08:47:59 crc kubenswrapper[4992]: I1211 08:47:59.505457 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" event={"ID":"7f756693-1fa0-41b1-8abf-4eb663ed232b","Type":"ContainerStarted","Data":"9622b70b7ea2983f440e4112a51ff931ad01fea9109895456ba4dd91a0b077a8"} Dec 11 08:48:00 crc kubenswrapper[4992]: I1211 08:48:00.515762 4992 generic.go:334] "Generic (PLEG): container finished" podID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerID="60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7" exitCode=0 Dec 11 08:48:00 crc kubenswrapper[4992]: I1211 08:48:00.515852 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" event={"ID":"7f756693-1fa0-41b1-8abf-4eb663ed232b","Type":"ContainerDied","Data":"60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7"} Dec 11 08:48:01 crc 
kubenswrapper[4992]: I1211 08:48:01.526182 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" event={"ID":"7f756693-1fa0-41b1-8abf-4eb663ed232b","Type":"ContainerStarted","Data":"a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d"} Dec 11 08:48:01 crc kubenswrapper[4992]: I1211 08:48:01.526784 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:48:01 crc kubenswrapper[4992]: I1211 08:48:01.545701 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" podStartSLOduration=3.5456836149999997 podStartE2EDuration="3.545683615s" podCreationTimestamp="2025-12-11 08:47:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:48:01.543334417 +0000 UTC m=+1505.802808353" watchObservedRunningTime="2025-12-11 08:48:01.545683615 +0000 UTC m=+1505.805157541" Dec 11 08:48:08 crc kubenswrapper[4992]: I1211 08:48:08.888462 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:48:08 crc kubenswrapper[4992]: I1211 08:48:08.962196 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-tzl55"] Dec 11 08:48:08 crc kubenswrapper[4992]: I1211 08:48:08.962718 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerName="dnsmasq-dns" containerID="cri-o://6a9fc6abaead35ec9384932e100c7af9b6e32b797b7afe3943fd2d6d2403bf7a" gracePeriod=10 Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.126755 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-cpgkh"] Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.128273 4992 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.138270 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-cpgkh"] Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.221898 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.222071 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.222093 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.222142 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-config\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.222171 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66cq7\" (UniqueName: \"kubernetes.io/projected/7e8bbdf3-3509-4a6d-a1c4-decafb575016-kube-api-access-66cq7\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.222211 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.222290 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324072 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324236 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324268 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324297 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-config\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324331 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66cq7\" (UniqueName: \"kubernetes.io/projected/7e8bbdf3-3509-4a6d-a1c4-decafb575016-kube-api-access-66cq7\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324358 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.324794 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.325359 4992 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.325387 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.325394 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.325533 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-config\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.325765 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.325860 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7e8bbdf3-3509-4a6d-a1c4-decafb575016-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.364873 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66cq7\" (UniqueName: \"kubernetes.io/projected/7e8bbdf3-3509-4a6d-a1c4-decafb575016-kube-api-access-66cq7\") pod \"dnsmasq-dns-cb6ffcf87-cpgkh\" (UID: \"7e8bbdf3-3509-4a6d-a1c4-decafb575016\") " pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.475470 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.594046 4992 scope.go:117] "RemoveContainer" containerID="9542751233af15fde4b73199cfba6b71a6fd46915b44587491c57c229d08adaf" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.597289 4992 generic.go:334] "Generic (PLEG): container finished" podID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerID="6a9fc6abaead35ec9384932e100c7af9b6e32b797b7afe3943fd2d6d2403bf7a" exitCode=0 Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.597343 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" event={"ID":"daf07ee3-b6e9-4d16-972c-9df83d121006","Type":"ContainerDied","Data":"6a9fc6abaead35ec9384932e100c7af9b6e32b797b7afe3943fd2d6d2403bf7a"} Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.597394 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" event={"ID":"daf07ee3-b6e9-4d16-972c-9df83d121006","Type":"ContainerDied","Data":"a50f60c0a931794d6ac61bfcb168af4c5720e3a218783689fb61ba083bde0fa2"} Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.597409 4992 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a50f60c0a931794d6ac61bfcb168af4c5720e3a218783689fb61ba083bde0fa2" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.671486 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.834118 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-nb\") pod \"daf07ee3-b6e9-4d16-972c-9df83d121006\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.834521 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-svc\") pod \"daf07ee3-b6e9-4d16-972c-9df83d121006\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.834569 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-sb\") pod \"daf07ee3-b6e9-4d16-972c-9df83d121006\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.834691 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bttc7\" (UniqueName: \"kubernetes.io/projected/daf07ee3-b6e9-4d16-972c-9df83d121006-kube-api-access-bttc7\") pod \"daf07ee3-b6e9-4d16-972c-9df83d121006\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.834785 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-swift-storage-0\") pod \"daf07ee3-b6e9-4d16-972c-9df83d121006\" 
(UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.834852 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-config\") pod \"daf07ee3-b6e9-4d16-972c-9df83d121006\" (UID: \"daf07ee3-b6e9-4d16-972c-9df83d121006\") " Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.840059 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf07ee3-b6e9-4d16-972c-9df83d121006-kube-api-access-bttc7" (OuterVolumeSpecName: "kube-api-access-bttc7") pod "daf07ee3-b6e9-4d16-972c-9df83d121006" (UID: "daf07ee3-b6e9-4d16-972c-9df83d121006"). InnerVolumeSpecName "kube-api-access-bttc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.886410 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-config" (OuterVolumeSpecName: "config") pod "daf07ee3-b6e9-4d16-972c-9df83d121006" (UID: "daf07ee3-b6e9-4d16-972c-9df83d121006"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.891731 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "daf07ee3-b6e9-4d16-972c-9df83d121006" (UID: "daf07ee3-b6e9-4d16-972c-9df83d121006"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.901773 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "daf07ee3-b6e9-4d16-972c-9df83d121006" (UID: "daf07ee3-b6e9-4d16-972c-9df83d121006"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.903261 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "daf07ee3-b6e9-4d16-972c-9df83d121006" (UID: "daf07ee3-b6e9-4d16-972c-9df83d121006"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.911679 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "daf07ee3-b6e9-4d16-972c-9df83d121006" (UID: "daf07ee3-b6e9-4d16-972c-9df83d121006"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.936871 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.937167 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.937237 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.937293 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bttc7\" (UniqueName: \"kubernetes.io/projected/daf07ee3-b6e9-4d16-972c-9df83d121006-kube-api-access-bttc7\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.937411 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.937473 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/daf07ee3-b6e9-4d16-972c-9df83d121006-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:09 crc kubenswrapper[4992]: I1211 08:48:09.941201 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-cpgkh"] Dec 11 08:48:10 crc kubenswrapper[4992]: I1211 08:48:10.608539 4992 generic.go:334] "Generic (PLEG): container finished" podID="7e8bbdf3-3509-4a6d-a1c4-decafb575016" 
containerID="6b70f0c91472b64564211b84a8f44f0cb7ae12f156b9ab4c1fe8d93d901d477d" exitCode=0 Dec 11 08:48:10 crc kubenswrapper[4992]: I1211 08:48:10.608667 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" event={"ID":"7e8bbdf3-3509-4a6d-a1c4-decafb575016","Type":"ContainerDied","Data":"6b70f0c91472b64564211b84a8f44f0cb7ae12f156b9ab4c1fe8d93d901d477d"} Dec 11 08:48:10 crc kubenswrapper[4992]: I1211 08:48:10.609031 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-tzl55" Dec 11 08:48:10 crc kubenswrapper[4992]: I1211 08:48:10.609050 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" event={"ID":"7e8bbdf3-3509-4a6d-a1c4-decafb575016","Type":"ContainerStarted","Data":"1449b587bbc640deeaeffba72805f4c08b81232cf03d15e3bf09d7d35021e752"} Dec 11 08:48:10 crc kubenswrapper[4992]: I1211 08:48:10.656801 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-tzl55"] Dec 11 08:48:10 crc kubenswrapper[4992]: I1211 08:48:10.666158 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-tzl55"] Dec 11 08:48:11 crc kubenswrapper[4992]: I1211 08:48:11.621974 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" event={"ID":"7e8bbdf3-3509-4a6d-a1c4-decafb575016","Type":"ContainerStarted","Data":"1e15c2c01191e4281866278b03d33d62562b868b14939b3268d2a43f334c986c"} Dec 11 08:48:11 crc kubenswrapper[4992]: I1211 08:48:11.622550 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:11 crc kubenswrapper[4992]: I1211 08:48:11.659394 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" podStartSLOduration=2.659371659 podStartE2EDuration="2.659371659s" 
podCreationTimestamp="2025-12-11 08:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:48:11.653690921 +0000 UTC m=+1515.913164847" watchObservedRunningTime="2025-12-11 08:48:11.659371659 +0000 UTC m=+1515.918845585" Dec 11 08:48:12 crc kubenswrapper[4992]: I1211 08:48:12.109744 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" path="/var/lib/kubelet/pods/daf07ee3-b6e9-4d16-972c-9df83d121006/volumes" Dec 11 08:48:19 crc kubenswrapper[4992]: I1211 08:48:19.477806 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-cpgkh" Dec 11 08:48:19 crc kubenswrapper[4992]: I1211 08:48:19.591571 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-pl75f"] Dec 11 08:48:19 crc kubenswrapper[4992]: I1211 08:48:19.600742 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerName="dnsmasq-dns" containerID="cri-o://a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d" gracePeriod=10 Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.060128 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133239 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-openstack-edpm-ipam\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133406 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-config\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133499 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-svc\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133533 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgp44\" (UniqueName: \"kubernetes.io/projected/7f756693-1fa0-41b1-8abf-4eb663ed232b-kube-api-access-pgp44\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133566 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-swift-storage-0\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133592 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-sb\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.133729 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-nb\") pod \"7f756693-1fa0-41b1-8abf-4eb663ed232b\" (UID: \"7f756693-1fa0-41b1-8abf-4eb663ed232b\") " Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.139958 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f756693-1fa0-41b1-8abf-4eb663ed232b-kube-api-access-pgp44" (OuterVolumeSpecName: "kube-api-access-pgp44") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "kube-api-access-pgp44". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.186650 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.187295 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.192498 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.195874 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.204182 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-config" (OuterVolumeSpecName: "config") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.210548 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f756693-1fa0-41b1-8abf-4eb663ed232b" (UID: "7f756693-1fa0-41b1-8abf-4eb663ed232b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.236560 4992 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-config\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.236891 4992 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.236952 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgp44\" (UniqueName: \"kubernetes.io/projected/7f756693-1fa0-41b1-8abf-4eb663ed232b-kube-api-access-pgp44\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.237011 4992 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.237154 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.237220 4992 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.237294 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7f756693-1fa0-41b1-8abf-4eb663ed232b-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.707020 
4992 generic.go:334] "Generic (PLEG): container finished" podID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerID="a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d" exitCode=0 Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.707072 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" event={"ID":"7f756693-1fa0-41b1-8abf-4eb663ed232b","Type":"ContainerDied","Data":"a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d"} Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.707118 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" event={"ID":"7f756693-1fa0-41b1-8abf-4eb663ed232b","Type":"ContainerDied","Data":"9622b70b7ea2983f440e4112a51ff931ad01fea9109895456ba4dd91a0b077a8"} Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.707155 4992 scope.go:117] "RemoveContainer" containerID="a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.707829 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-pl75f" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.767288 4992 scope.go:117] "RemoveContainer" containerID="60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.770362 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-pl75f"] Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.778741 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-pl75f"] Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.818836 4992 scope.go:117] "RemoveContainer" containerID="a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d" Dec 11 08:48:20 crc kubenswrapper[4992]: E1211 08:48:20.819873 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d\": container with ID starting with a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d not found: ID does not exist" containerID="a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.819939 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d"} err="failed to get container status \"a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d\": rpc error: code = NotFound desc = could not find container \"a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d\": container with ID starting with a5a4c3231c946d872998b4e9e4afc36dc0afea2b616cd318d02929670010537d not found: ID does not exist" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.819976 4992 scope.go:117] "RemoveContainer" containerID="60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7" Dec 11 
08:48:20 crc kubenswrapper[4992]: E1211 08:48:20.820478 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7\": container with ID starting with 60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7 not found: ID does not exist" containerID="60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7" Dec 11 08:48:20 crc kubenswrapper[4992]: I1211 08:48:20.820521 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7"} err="failed to get container status \"60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7\": rpc error: code = NotFound desc = could not find container \"60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7\": container with ID starting with 60c0ecf90a50bf51c6efb8c545dbdedcec40ec15a75f891c55307c4530b6aba7 not found: ID does not exist" Dec 11 08:48:22 crc kubenswrapper[4992]: I1211 08:48:22.108622 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" path="/var/lib/kubelet/pods/7f756693-1fa0-41b1-8abf-4eb663ed232b/volumes" Dec 11 08:48:30 crc kubenswrapper[4992]: I1211 08:48:30.799504 4992 generic.go:334] "Generic (PLEG): container finished" podID="614cd874-917b-4851-b702-cfb170fcec4d" containerID="42cd88be08fd82a4678bc0ac63c3163f55582a4a54f8a506d4d149ba43a1808d" exitCode=0 Dec 11 08:48:30 crc kubenswrapper[4992]: I1211 08:48:30.799576 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"614cd874-917b-4851-b702-cfb170fcec4d","Type":"ContainerDied","Data":"42cd88be08fd82a4678bc0ac63c3163f55582a4a54f8a506d4d149ba43a1808d"} Dec 11 08:48:31 crc kubenswrapper[4992]: I1211 08:48:31.810403 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"614cd874-917b-4851-b702-cfb170fcec4d","Type":"ContainerStarted","Data":"c9718fdf89293b61579335044d51e914e4d7fcb1b8694a02ddd057888fd89256"} Dec 11 08:48:31 crc kubenswrapper[4992]: I1211 08:48:31.810914 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 08:48:31 crc kubenswrapper[4992]: I1211 08:48:31.812234 4992 generic.go:334] "Generic (PLEG): container finished" podID="b10485db-da0e-493a-ad33-82634346be84" containerID="b1ca116a1db6aca9cab82c9a52be54e892fa5fcbaaff000831dfa0cc7076c3f8" exitCode=0 Dec 11 08:48:31 crc kubenswrapper[4992]: I1211 08:48:31.812270 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b10485db-da0e-493a-ad33-82634346be84","Type":"ContainerDied","Data":"b1ca116a1db6aca9cab82c9a52be54e892fa5fcbaaff000831dfa0cc7076c3f8"} Dec 11 08:48:31 crc kubenswrapper[4992]: I1211 08:48:31.842386 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.842362885 podStartE2EDuration="36.842362885s" podCreationTimestamp="2025-12-11 08:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:48:31.836766859 +0000 UTC m=+1536.096240795" watchObservedRunningTime="2025-12-11 08:48:31.842362885 +0000 UTC m=+1536.101836811" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.583868 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n"] Dec 11 08:48:32 crc kubenswrapper[4992]: E1211 08:48:32.584569 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerName="dnsmasq-dns" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.584587 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerName="dnsmasq-dns" Dec 11 08:48:32 crc kubenswrapper[4992]: E1211 08:48:32.584604 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerName="dnsmasq-dns" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.584611 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerName="dnsmasq-dns" Dec 11 08:48:32 crc kubenswrapper[4992]: E1211 08:48:32.584624 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerName="init" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.584645 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerName="init" Dec 11 08:48:32 crc kubenswrapper[4992]: E1211 08:48:32.584670 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerName="init" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.584675 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerName="init" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.584869 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf07ee3-b6e9-4d16-972c-9df83d121006" containerName="dnsmasq-dns" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.584883 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f756693-1fa0-41b1-8abf-4eb663ed232b" containerName="dnsmasq-dns" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.585548 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.587234 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.587472 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.587901 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.588909 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.595647 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n"] Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.667251 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.668517 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.668716 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hzx8\" (UniqueName: \"kubernetes.io/projected/540586d6-4da9-4c8e-9866-dbe51de9f643-kube-api-access-6hzx8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.669126 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.771215 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.771363 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.771979 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.772027 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hzx8\" (UniqueName: \"kubernetes.io/projected/540586d6-4da9-4c8e-9866-dbe51de9f643-kube-api-access-6hzx8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.775979 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.776239 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.789948 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.794691 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hzx8\" (UniqueName: \"kubernetes.io/projected/540586d6-4da9-4c8e-9866-dbe51de9f643-kube-api-access-6hzx8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.827463 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b10485db-da0e-493a-ad33-82634346be84","Type":"ContainerStarted","Data":"4b944e1dd28dfd905f6f43ecd38b49dfb1f21c059f6cbaf100f8be6975bd1a3e"} Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.827784 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.858864 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.858843826 podStartE2EDuration="36.858843826s" podCreationTimestamp="2025-12-11 08:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 08:48:32.848021522 +0000 UTC m=+1537.107495448" watchObservedRunningTime="2025-12-11 08:48:32.858843826 +0000 UTC m=+1537.118317762" Dec 11 08:48:32 crc kubenswrapper[4992]: I1211 08:48:32.908906 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:33 crc kubenswrapper[4992]: I1211 08:48:33.460286 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n"] Dec 11 08:48:33 crc kubenswrapper[4992]: I1211 08:48:33.838559 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" event={"ID":"540586d6-4da9-4c8e-9866-dbe51de9f643","Type":"ContainerStarted","Data":"3e9aa9a62bdc81f4b598508070d553397e6bd5a0e3c7ca002baae44d9ea0609a"} Dec 11 08:48:35 crc kubenswrapper[4992]: I1211 08:48:35.378574 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:48:35 crc kubenswrapper[4992]: I1211 08:48:35.379340 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:48:45 crc kubenswrapper[4992]: I1211 08:48:45.858812 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 08:48:45 crc kubenswrapper[4992]: I1211 08:48:45.956535 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" event={"ID":"540586d6-4da9-4c8e-9866-dbe51de9f643","Type":"ContainerStarted","Data":"e52d2de118003131cde0c138b0e4b2ddd231fcd7741eb21f0554954037ee3386"} Dec 11 08:48:45 crc kubenswrapper[4992]: I1211 08:48:45.975380 4992 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" podStartSLOduration=1.872796626 podStartE2EDuration="13.975356937s" podCreationTimestamp="2025-12-11 08:48:32 +0000 UTC" firstStartedPulling="2025-12-11 08:48:33.462539824 +0000 UTC m=+1537.722013740" lastFinishedPulling="2025-12-11 08:48:45.565100115 +0000 UTC m=+1549.824574051" observedRunningTime="2025-12-11 08:48:45.97137462 +0000 UTC m=+1550.230848556" watchObservedRunningTime="2025-12-11 08:48:45.975356937 +0000 UTC m=+1550.234830863" Dec 11 08:48:46 crc kubenswrapper[4992]: I1211 08:48:46.961788 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 08:48:57 crc kubenswrapper[4992]: I1211 08:48:57.058863 4992 generic.go:334] "Generic (PLEG): container finished" podID="540586d6-4da9-4c8e-9866-dbe51de9f643" containerID="e52d2de118003131cde0c138b0e4b2ddd231fcd7741eb21f0554954037ee3386" exitCode=0 Dec 11 08:48:57 crc kubenswrapper[4992]: I1211 08:48:57.058941 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" event={"ID":"540586d6-4da9-4c8e-9866-dbe51de9f643","Type":"ContainerDied","Data":"e52d2de118003131cde0c138b0e4b2ddd231fcd7741eb21f0554954037ee3386"} Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.501973 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.611678 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-repo-setup-combined-ca-bundle\") pod \"540586d6-4da9-4c8e-9866-dbe51de9f643\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.611757 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-inventory\") pod \"540586d6-4da9-4c8e-9866-dbe51de9f643\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.611814 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-ssh-key\") pod \"540586d6-4da9-4c8e-9866-dbe51de9f643\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.611837 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hzx8\" (UniqueName: \"kubernetes.io/projected/540586d6-4da9-4c8e-9866-dbe51de9f643-kube-api-access-6hzx8\") pod \"540586d6-4da9-4c8e-9866-dbe51de9f643\" (UID: \"540586d6-4da9-4c8e-9866-dbe51de9f643\") " Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.617806 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "540586d6-4da9-4c8e-9866-dbe51de9f643" (UID: "540586d6-4da9-4c8e-9866-dbe51de9f643"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.620924 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540586d6-4da9-4c8e-9866-dbe51de9f643-kube-api-access-6hzx8" (OuterVolumeSpecName: "kube-api-access-6hzx8") pod "540586d6-4da9-4c8e-9866-dbe51de9f643" (UID: "540586d6-4da9-4c8e-9866-dbe51de9f643"). InnerVolumeSpecName "kube-api-access-6hzx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.643472 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "540586d6-4da9-4c8e-9866-dbe51de9f643" (UID: "540586d6-4da9-4c8e-9866-dbe51de9f643"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.643669 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-inventory" (OuterVolumeSpecName: "inventory") pod "540586d6-4da9-4c8e-9866-dbe51de9f643" (UID: "540586d6-4da9-4c8e-9866-dbe51de9f643"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.714183 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.714221 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hzx8\" (UniqueName: \"kubernetes.io/projected/540586d6-4da9-4c8e-9866-dbe51de9f643-kube-api-access-6hzx8\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.714235 4992 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:58 crc kubenswrapper[4992]: I1211 08:48:58.714244 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540586d6-4da9-4c8e-9866-dbe51de9f643-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.080035 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" event={"ID":"540586d6-4da9-4c8e-9866-dbe51de9f643","Type":"ContainerDied","Data":"3e9aa9a62bdc81f4b598508070d553397e6bd5a0e3c7ca002baae44d9ea0609a"} Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.080100 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e9aa9a62bdc81f4b598508070d553397e6bd5a0e3c7ca002baae44d9ea0609a" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.080102 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.149002 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k"] Dec 11 08:48:59 crc kubenswrapper[4992]: E1211 08:48:59.149455 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540586d6-4da9-4c8e-9866-dbe51de9f643" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.149474 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="540586d6-4da9-4c8e-9866-dbe51de9f643" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.149730 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="540586d6-4da9-4c8e-9866-dbe51de9f643" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.150502 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.154480 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.154714 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.154869 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.155361 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.159623 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k"] Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.225622 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.225836 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqkgw\" (UniqueName: \"kubernetes.io/projected/ff4798f8-6563-4d95-ab98-252c0417f16f-kube-api-access-tqkgw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.225896 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.327983 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.328045 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqkgw\" (UniqueName: \"kubernetes.io/projected/ff4798f8-6563-4d95-ab98-252c0417f16f-kube-api-access-tqkgw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.328103 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.334417 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.334975 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.345203 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqkgw\" (UniqueName: \"kubernetes.io/projected/ff4798f8-6563-4d95-ab98-252c0417f16f-kube-api-access-tqkgw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-nwx9k\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.507127 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:48:59 crc kubenswrapper[4992]: I1211 08:48:59.987620 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k"] Dec 11 08:49:00 crc kubenswrapper[4992]: I1211 08:49:00.091524 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" event={"ID":"ff4798f8-6563-4d95-ab98-252c0417f16f","Type":"ContainerStarted","Data":"46cd5993a3f2061b14e4c6ffa7b8f74c0dd1008eb14109ba2a4add72a98457dc"} Dec 11 08:49:01 crc kubenswrapper[4992]: I1211 08:49:01.101108 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" event={"ID":"ff4798f8-6563-4d95-ab98-252c0417f16f","Type":"ContainerStarted","Data":"6b10cd8ea4f982592673347c67ea9f149e1be0014507e34abdddd1b7f785de58"} Dec 11 08:49:01 crc kubenswrapper[4992]: I1211 08:49:01.124046 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" podStartSLOduration=1.392574119 podStartE2EDuration="2.124022431s" podCreationTimestamp="2025-12-11 08:48:59 +0000 UTC" firstStartedPulling="2025-12-11 08:48:59.995963539 +0000 UTC m=+1564.255437465" lastFinishedPulling="2025-12-11 08:49:00.727411851 +0000 UTC m=+1564.986885777" observedRunningTime="2025-12-11 08:49:01.118747532 +0000 UTC m=+1565.378221488" watchObservedRunningTime="2025-12-11 08:49:01.124022431 +0000 UTC m=+1565.383496367" Dec 11 08:49:04 crc kubenswrapper[4992]: I1211 08:49:04.125852 4992 generic.go:334] "Generic (PLEG): container finished" podID="ff4798f8-6563-4d95-ab98-252c0417f16f" containerID="6b10cd8ea4f982592673347c67ea9f149e1be0014507e34abdddd1b7f785de58" exitCode=0 Dec 11 08:49:04 crc kubenswrapper[4992]: I1211 08:49:04.125929 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" event={"ID":"ff4798f8-6563-4d95-ab98-252c0417f16f","Type":"ContainerDied","Data":"6b10cd8ea4f982592673347c67ea9f149e1be0014507e34abdddd1b7f785de58"} Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.378387 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.378712 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.537087 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.641541 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqkgw\" (UniqueName: \"kubernetes.io/projected/ff4798f8-6563-4d95-ab98-252c0417f16f-kube-api-access-tqkgw\") pod \"ff4798f8-6563-4d95-ab98-252c0417f16f\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.641684 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key\") pod \"ff4798f8-6563-4d95-ab98-252c0417f16f\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.641759 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-inventory\") pod \"ff4798f8-6563-4d95-ab98-252c0417f16f\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.646759 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4798f8-6563-4d95-ab98-252c0417f16f-kube-api-access-tqkgw" (OuterVolumeSpecName: "kube-api-access-tqkgw") pod "ff4798f8-6563-4d95-ab98-252c0417f16f" (UID: "ff4798f8-6563-4d95-ab98-252c0417f16f"). InnerVolumeSpecName "kube-api-access-tqkgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:49:05 crc kubenswrapper[4992]: E1211 08:49:05.665935 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key podName:ff4798f8-6563-4d95-ab98-252c0417f16f nodeName:}" failed. No retries permitted until 2025-12-11 08:49:06.165908888 +0000 UTC m=+1570.425382814 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key") pod "ff4798f8-6563-4d95-ab98-252c0417f16f" (UID: "ff4798f8-6563-4d95-ab98-252c0417f16f") : error deleting /var/lib/kubelet/pods/ff4798f8-6563-4d95-ab98-252c0417f16f/volume-subpaths: remove /var/lib/kubelet/pods/ff4798f8-6563-4d95-ab98-252c0417f16f/volume-subpaths: no such file or directory Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.668468 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-inventory" (OuterVolumeSpecName: "inventory") pod "ff4798f8-6563-4d95-ab98-252c0417f16f" (UID: "ff4798f8-6563-4d95-ab98-252c0417f16f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.743671 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:49:05 crc kubenswrapper[4992]: I1211 08:49:05.743707 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqkgw\" (UniqueName: \"kubernetes.io/projected/ff4798f8-6563-4d95-ab98-252c0417f16f-kube-api-access-tqkgw\") on node \"crc\" DevicePath \"\"" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.149250 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" event={"ID":"ff4798f8-6563-4d95-ab98-252c0417f16f","Type":"ContainerDied","Data":"46cd5993a3f2061b14e4c6ffa7b8f74c0dd1008eb14109ba2a4add72a98457dc"} Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.149291 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46cd5993a3f2061b14e4c6ffa7b8f74c0dd1008eb14109ba2a4add72a98457dc" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 
08:49:06.149335 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-nwx9k" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.237902 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7"] Dec 11 08:49:06 crc kubenswrapper[4992]: E1211 08:49:06.238877 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4798f8-6563-4d95-ab98-252c0417f16f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.238903 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4798f8-6563-4d95-ab98-252c0417f16f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.239481 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4798f8-6563-4d95-ab98-252c0417f16f" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.240674 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.254891 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key\") pod \"ff4798f8-6563-4d95-ab98-252c0417f16f\" (UID: \"ff4798f8-6563-4d95-ab98-252c0417f16f\") " Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.255921 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.256105 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.256212 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k7tq\" (UniqueName: \"kubernetes.io/projected/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-kube-api-access-7k7tq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.256310 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.259697 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7"] Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.264989 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ff4798f8-6563-4d95-ab98-252c0417f16f" (UID: "ff4798f8-6563-4d95-ab98-252c0417f16f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.357400 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k7tq\" (UniqueName: \"kubernetes.io/projected/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-kube-api-access-7k7tq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.357469 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.357542 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.357603 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.357724 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ff4798f8-6563-4d95-ab98-252c0417f16f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.361277 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.361462 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.362714 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.373984 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k7tq\" (UniqueName: \"kubernetes.io/projected/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-kube-api-access-7k7tq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:06 crc kubenswrapper[4992]: I1211 08:49:06.618743 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:49:07 crc kubenswrapper[4992]: I1211 08:49:07.199040 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7"] Dec 11 08:49:07 crc kubenswrapper[4992]: W1211 08:49:07.202843 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6066a587_fdc9_4ae8_ad82_4ddf1844f9e6.slice/crio-8c7bbc2fe4dac9825e6d3142bfb29a75b16047473695293c69fd7d47260b305e WatchSource:0}: Error finding container 8c7bbc2fe4dac9825e6d3142bfb29a75b16047473695293c69fd7d47260b305e: Status 404 returned error can't find the container with id 8c7bbc2fe4dac9825e6d3142bfb29a75b16047473695293c69fd7d47260b305e Dec 11 08:49:08 crc kubenswrapper[4992]: I1211 08:49:08.168533 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" event={"ID":"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6","Type":"ContainerStarted","Data":"64bbb46f78de345351d85b260130f775dbaf065cbea2cbb43732bad237a1146e"} Dec 11 08:49:08 crc 
kubenswrapper[4992]: I1211 08:49:08.168964 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" event={"ID":"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6","Type":"ContainerStarted","Data":"8c7bbc2fe4dac9825e6d3142bfb29a75b16047473695293c69fd7d47260b305e"} Dec 11 08:49:08 crc kubenswrapper[4992]: I1211 08:49:08.186758 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" podStartSLOduration=1.778341408 podStartE2EDuration="2.186738424s" podCreationTimestamp="2025-12-11 08:49:06 +0000 UTC" firstStartedPulling="2025-12-11 08:49:07.204814216 +0000 UTC m=+1571.464288142" lastFinishedPulling="2025-12-11 08:49:07.613211232 +0000 UTC m=+1571.872685158" observedRunningTime="2025-12-11 08:49:08.180905242 +0000 UTC m=+1572.440379168" watchObservedRunningTime="2025-12-11 08:49:08.186738424 +0000 UTC m=+1572.446212350" Dec 11 08:49:09 crc kubenswrapper[4992]: I1211 08:49:09.733350 4992 scope.go:117] "RemoveContainer" containerID="7cb784d206c5ffadbdbc364d7da11f757d984a6faece0f465dc5454e5623b3d7" Dec 11 08:49:35 crc kubenswrapper[4992]: I1211 08:49:35.379334 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:49:35 crc kubenswrapper[4992]: I1211 08:49:35.379961 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:49:35 crc kubenswrapper[4992]: I1211 08:49:35.380007 4992 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:49:35 crc kubenswrapper[4992]: I1211 08:49:35.380896 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 08:49:35 crc kubenswrapper[4992]: I1211 08:49:35.380970 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" gracePeriod=600 Dec 11 08:49:36 crc kubenswrapper[4992]: E1211 08:49:36.174604 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:49:36 crc kubenswrapper[4992]: I1211 08:49:36.441894 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" exitCode=0 Dec 11 08:49:36 crc kubenswrapper[4992]: I1211 08:49:36.441952 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69"} Dec 11 08:49:36 crc kubenswrapper[4992]: I1211 08:49:36.442832 4992 scope.go:117] "RemoveContainer" containerID="c9b7b9751f69dacb432c6111e285d3e2e47bd2a6e7fe288f0f982e3e58b7bafb" Dec 11 08:49:36 crc kubenswrapper[4992]: I1211 08:49:36.443611 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:49:36 crc kubenswrapper[4992]: E1211 08:49:36.444063 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:49:50 crc kubenswrapper[4992]: I1211 08:49:50.097421 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:49:50 crc kubenswrapper[4992]: E1211 08:49:50.098624 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:50:02 crc kubenswrapper[4992]: I1211 08:50:02.095943 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:50:02 crc kubenswrapper[4992]: E1211 08:50:02.097889 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:50:15 crc kubenswrapper[4992]: I1211 08:50:15.095242 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:50:15 crc kubenswrapper[4992]: E1211 08:50:15.096113 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.119268 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qb9t6"] Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.121910 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.134078 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb9t6"] Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.265547 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-catalog-content\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.266300 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7w5q\" (UniqueName: \"kubernetes.io/projected/03808191-cbe3-4e41-bfe3-6836a6cba768-kube-api-access-t7w5q\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.266429 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-utilities\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.369052 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-catalog-content\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.369154 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t7w5q\" (UniqueName: \"kubernetes.io/projected/03808191-cbe3-4e41-bfe3-6836a6cba768-kube-api-access-t7w5q\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.369184 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-utilities\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.369617 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-catalog-content\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.369754 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-utilities\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.390125 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7w5q\" (UniqueName: \"kubernetes.io/projected/03808191-cbe3-4e41-bfe3-6836a6cba768-kube-api-access-t7w5q\") pod \"redhat-marketplace-qb9t6\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.455538 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:27 crc kubenswrapper[4992]: I1211 08:50:27.936698 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb9t6"] Dec 11 08:50:28 crc kubenswrapper[4992]: I1211 08:50:28.909416 4992 generic.go:334] "Generic (PLEG): container finished" podID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerID="4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34" exitCode=0 Dec 11 08:50:28 crc kubenswrapper[4992]: I1211 08:50:28.909486 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb9t6" event={"ID":"03808191-cbe3-4e41-bfe3-6836a6cba768","Type":"ContainerDied","Data":"4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34"} Dec 11 08:50:28 crc kubenswrapper[4992]: I1211 08:50:28.910095 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb9t6" event={"ID":"03808191-cbe3-4e41-bfe3-6836a6cba768","Type":"ContainerStarted","Data":"81b9eea0c2dbf3acf8f5251c9b1b10dfff8be37eaef09876457874980af8264b"} Dec 11 08:50:28 crc kubenswrapper[4992]: I1211 08:50:28.911544 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 08:50:30 crc kubenswrapper[4992]: I1211 08:50:30.095516 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:50:30 crc kubenswrapper[4992]: E1211 08:50:30.095859 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 
08:50:30 crc kubenswrapper[4992]: I1211 08:50:30.930703 4992 generic.go:334] "Generic (PLEG): container finished" podID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerID="e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf" exitCode=0 Dec 11 08:50:30 crc kubenswrapper[4992]: I1211 08:50:30.930774 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb9t6" event={"ID":"03808191-cbe3-4e41-bfe3-6836a6cba768","Type":"ContainerDied","Data":"e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf"} Dec 11 08:50:32 crc kubenswrapper[4992]: I1211 08:50:32.952511 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb9t6" event={"ID":"03808191-cbe3-4e41-bfe3-6836a6cba768","Type":"ContainerStarted","Data":"5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d"} Dec 11 08:50:32 crc kubenswrapper[4992]: I1211 08:50:32.981054 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qb9t6" podStartSLOduration=2.859814401 podStartE2EDuration="5.981036907s" podCreationTimestamp="2025-12-11 08:50:27 +0000 UTC" firstStartedPulling="2025-12-11 08:50:28.911336267 +0000 UTC m=+1653.170810183" lastFinishedPulling="2025-12-11 08:50:32.032558753 +0000 UTC m=+1656.292032689" observedRunningTime="2025-12-11 08:50:32.978194678 +0000 UTC m=+1657.237668624" watchObservedRunningTime="2025-12-11 08:50:32.981036907 +0000 UTC m=+1657.240510833" Dec 11 08:50:37 crc kubenswrapper[4992]: I1211 08:50:37.456243 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:37 crc kubenswrapper[4992]: I1211 08:50:37.456902 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:37 crc kubenswrapper[4992]: I1211 08:50:37.504927 4992 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:38 crc kubenswrapper[4992]: I1211 08:50:38.046472 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:38 crc kubenswrapper[4992]: I1211 08:50:38.106989 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb9t6"] Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.015023 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qb9t6" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="registry-server" containerID="cri-o://5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d" gracePeriod=2 Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.459823 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.631733 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-utilities\") pod \"03808191-cbe3-4e41-bfe3-6836a6cba768\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.631960 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7w5q\" (UniqueName: \"kubernetes.io/projected/03808191-cbe3-4e41-bfe3-6836a6cba768-kube-api-access-t7w5q\") pod \"03808191-cbe3-4e41-bfe3-6836a6cba768\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.632093 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-catalog-content\") 
pod \"03808191-cbe3-4e41-bfe3-6836a6cba768\" (UID: \"03808191-cbe3-4e41-bfe3-6836a6cba768\") " Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.632883 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-utilities" (OuterVolumeSpecName: "utilities") pod "03808191-cbe3-4e41-bfe3-6836a6cba768" (UID: "03808191-cbe3-4e41-bfe3-6836a6cba768"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.640654 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03808191-cbe3-4e41-bfe3-6836a6cba768-kube-api-access-t7w5q" (OuterVolumeSpecName: "kube-api-access-t7w5q") pod "03808191-cbe3-4e41-bfe3-6836a6cba768" (UID: "03808191-cbe3-4e41-bfe3-6836a6cba768"). InnerVolumeSpecName "kube-api-access-t7w5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.657044 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03808191-cbe3-4e41-bfe3-6836a6cba768" (UID: "03808191-cbe3-4e41-bfe3-6836a6cba768"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.734677 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7w5q\" (UniqueName: \"kubernetes.io/projected/03808191-cbe3-4e41-bfe3-6836a6cba768-kube-api-access-t7w5q\") on node \"crc\" DevicePath \"\"" Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.734717 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:50:40 crc kubenswrapper[4992]: I1211 08:50:40.734727 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03808191-cbe3-4e41-bfe3-6836a6cba768-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.028212 4992 generic.go:334] "Generic (PLEG): container finished" podID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerID="5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d" exitCode=0 Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.028302 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb9t6" event={"ID":"03808191-cbe3-4e41-bfe3-6836a6cba768","Type":"ContainerDied","Data":"5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d"} Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.028574 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb9t6" event={"ID":"03808191-cbe3-4e41-bfe3-6836a6cba768","Type":"ContainerDied","Data":"81b9eea0c2dbf3acf8f5251c9b1b10dfff8be37eaef09876457874980af8264b"} Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.028612 4992 scope.go:117] "RemoveContainer" containerID="5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 
08:50:41.028361 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb9t6" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.050249 4992 scope.go:117] "RemoveContainer" containerID="e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.076318 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb9t6"] Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.088578 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb9t6"] Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.090547 4992 scope.go:117] "RemoveContainer" containerID="4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.130206 4992 scope.go:117] "RemoveContainer" containerID="5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d" Dec 11 08:50:41 crc kubenswrapper[4992]: E1211 08:50:41.130754 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d\": container with ID starting with 5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d not found: ID does not exist" containerID="5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.130786 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d"} err="failed to get container status \"5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d\": rpc error: code = NotFound desc = could not find container \"5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d\": container with ID starting with 
5e8937d79dae8c2de4245c5fd60e2eccde6e3df6c29adc7502b7f9ef490ad26d not found: ID does not exist" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.130807 4992 scope.go:117] "RemoveContainer" containerID="e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf" Dec 11 08:50:41 crc kubenswrapper[4992]: E1211 08:50:41.131087 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf\": container with ID starting with e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf not found: ID does not exist" containerID="e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.131111 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf"} err="failed to get container status \"e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf\": rpc error: code = NotFound desc = could not find container \"e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf\": container with ID starting with e602e9af67dc69965fa948da279dfdc7bf16b4a7057ef6cd2d35d263a82d0eaf not found: ID does not exist" Dec 11 08:50:41 crc kubenswrapper[4992]: I1211 08:50:41.131124 4992 scope.go:117] "RemoveContainer" containerID="4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34" Dec 11 08:50:41 crc kubenswrapper[4992]: E1211 08:50:41.131367 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34\": container with ID starting with 4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34 not found: ID does not exist" containerID="4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34" Dec 11 08:50:41 crc 
kubenswrapper[4992]: I1211 08:50:41.131399 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34"} err="failed to get container status \"4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34\": rpc error: code = NotFound desc = could not find container \"4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34\": container with ID starting with 4bb00ffa6e492675e87e911b84ac5396dbc484115a5992b7503b010fe5488f34 not found: ID does not exist" Dec 11 08:50:42 crc kubenswrapper[4992]: I1211 08:50:42.107731 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" path="/var/lib/kubelet/pods/03808191-cbe3-4e41-bfe3-6836a6cba768/volumes" Dec 11 08:50:44 crc kubenswrapper[4992]: I1211 08:50:44.094921 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:50:44 crc kubenswrapper[4992]: E1211 08:50:44.095468 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.890049 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cjhxz"] Dec 11 08:50:48 crc kubenswrapper[4992]: E1211 08:50:48.891203 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="registry-server" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.891221 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="registry-server" Dec 11 08:50:48 crc kubenswrapper[4992]: E1211 08:50:48.891236 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="extract-content" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.891246 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="extract-content" Dec 11 08:50:48 crc kubenswrapper[4992]: E1211 08:50:48.891285 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="extract-utilities" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.891294 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="extract-utilities" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.891540 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="03808191-cbe3-4e41-bfe3-6836a6cba768" containerName="registry-server" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.894261 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:48 crc kubenswrapper[4992]: I1211 08:50:48.900774 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjhxz"] Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.002334 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrjp\" (UniqueName: \"kubernetes.io/projected/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-kube-api-access-2jrjp\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.002417 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-catalog-content\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.002457 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-utilities\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.103660 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrjp\" (UniqueName: \"kubernetes.io/projected/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-kube-api-access-2jrjp\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.104088 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-catalog-content\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.104671 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-utilities\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.104599 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-catalog-content\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.105006 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-utilities\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.126361 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrjp\" (UniqueName: \"kubernetes.io/projected/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-kube-api-access-2jrjp\") pod \"certified-operators-cjhxz\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.238234 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:49 crc kubenswrapper[4992]: I1211 08:50:49.511133 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cjhxz"] Dec 11 08:50:50 crc kubenswrapper[4992]: I1211 08:50:50.114534 4992 generic.go:334] "Generic (PLEG): container finished" podID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerID="6b21a34a491759f6e049e2faea5b6913a7361e51f547642e9eaa7592d13987fe" exitCode=0 Dec 11 08:50:50 crc kubenswrapper[4992]: I1211 08:50:50.114570 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerDied","Data":"6b21a34a491759f6e049e2faea5b6913a7361e51f547642e9eaa7592d13987fe"} Dec 11 08:50:50 crc kubenswrapper[4992]: I1211 08:50:50.114593 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerStarted","Data":"f26e4ff537ce4e8b96aa20eee1890e2d8e33dee102f402ea5a1a986f11888aa4"} Dec 11 08:50:52 crc kubenswrapper[4992]: I1211 08:50:52.136073 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerStarted","Data":"f9893f791aaa481e0ea6d4c5410df82f9fd4aae5ac9992417fe98c62984dec2f"} Dec 11 08:50:53 crc kubenswrapper[4992]: I1211 08:50:53.147252 4992 generic.go:334] "Generic (PLEG): container finished" podID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerID="f9893f791aaa481e0ea6d4c5410df82f9fd4aae5ac9992417fe98c62984dec2f" exitCode=0 Dec 11 08:50:53 crc kubenswrapper[4992]: I1211 08:50:53.147344 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" 
event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerDied","Data":"f9893f791aaa481e0ea6d4c5410df82f9fd4aae5ac9992417fe98c62984dec2f"} Dec 11 08:50:55 crc kubenswrapper[4992]: I1211 08:50:55.172205 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerStarted","Data":"67346b38a7e391abb6f5084d31b66adcbe05468960fdb1554e30dfd97fdaebac"} Dec 11 08:50:55 crc kubenswrapper[4992]: I1211 08:50:55.194747 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cjhxz" podStartSLOduration=2.68476696 podStartE2EDuration="7.194725632s" podCreationTimestamp="2025-12-11 08:50:48 +0000 UTC" firstStartedPulling="2025-12-11 08:50:50.11858047 +0000 UTC m=+1674.378054416" lastFinishedPulling="2025-12-11 08:50:54.628539162 +0000 UTC m=+1678.888013088" observedRunningTime="2025-12-11 08:50:55.191197706 +0000 UTC m=+1679.450671632" watchObservedRunningTime="2025-12-11 08:50:55.194725632 +0000 UTC m=+1679.454199558" Dec 11 08:50:56 crc kubenswrapper[4992]: I1211 08:50:56.106499 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:50:56 crc kubenswrapper[4992]: E1211 08:50:56.107018 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:50:59 crc kubenswrapper[4992]: I1211 08:50:59.239123 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:59 crc 
kubenswrapper[4992]: I1211 08:50:59.239402 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:50:59 crc kubenswrapper[4992]: I1211 08:50:59.284686 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:51:00 crc kubenswrapper[4992]: I1211 08:51:00.264941 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:51:00 crc kubenswrapper[4992]: I1211 08:51:00.319752 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjhxz"] Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.230801 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cjhxz" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="registry-server" containerID="cri-o://67346b38a7e391abb6f5084d31b66adcbe05468960fdb1554e30dfd97fdaebac" gracePeriod=2 Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.506446 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9ghw"] Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.509039 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.518282 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9ghw"] Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.633387 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-utilities\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.633496 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scvtb\" (UniqueName: \"kubernetes.io/projected/a0f25964-d793-4487-b3bc-1a33fe80ec9a-kube-api-access-scvtb\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.634120 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-catalog-content\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.736706 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scvtb\" (UniqueName: \"kubernetes.io/projected/a0f25964-d793-4487-b3bc-1a33fe80ec9a-kube-api-access-scvtb\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.736789 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-catalog-content\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.736919 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-utilities\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.737384 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-catalog-content\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.737431 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-utilities\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.759044 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scvtb\" (UniqueName: \"kubernetes.io/projected/a0f25964-d793-4487-b3bc-1a33fe80ec9a-kube-api-access-scvtb\") pod \"community-operators-p9ghw\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:02 crc kubenswrapper[4992]: I1211 08:51:02.842735 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.258822 4992 generic.go:334] "Generic (PLEG): container finished" podID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerID="67346b38a7e391abb6f5084d31b66adcbe05468960fdb1554e30dfd97fdaebac" exitCode=0 Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.259467 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerDied","Data":"67346b38a7e391abb6f5084d31b66adcbe05468960fdb1554e30dfd97fdaebac"} Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.408179 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9ghw"] Dec 11 08:51:03 crc kubenswrapper[4992]: W1211 08:51:03.416538 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f25964_d793_4487_b3bc_1a33fe80ec9a.slice/crio-934765147c89e963f8dbc165d2eb37c802d121091e88b971a43eb12936563c2f WatchSource:0}: Error finding container 934765147c89e963f8dbc165d2eb37c802d121091e88b971a43eb12936563c2f: Status 404 returned error can't find the container with id 934765147c89e963f8dbc165d2eb37c802d121091e88b971a43eb12936563c2f Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.638098 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.768262 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-utilities\") pod \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.768581 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jrjp\" (UniqueName: \"kubernetes.io/projected/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-kube-api-access-2jrjp\") pod \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.768673 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-catalog-content\") pod \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\" (UID: \"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9\") " Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.769327 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-utilities" (OuterVolumeSpecName: "utilities") pod "9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" (UID: "9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.777174 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-kube-api-access-2jrjp" (OuterVolumeSpecName: "kube-api-access-2jrjp") pod "9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" (UID: "9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9"). InnerVolumeSpecName "kube-api-access-2jrjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.820889 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" (UID: "9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.870870 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jrjp\" (UniqueName: \"kubernetes.io/projected/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-kube-api-access-2jrjp\") on node \"crc\" DevicePath \"\"" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.870907 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:51:03 crc kubenswrapper[4992]: I1211 08:51:03.870916 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.272987 4992 generic.go:334] "Generic (PLEG): container finished" podID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerID="cd2ce0b28e7c02dd95262e56926aedcc4c6b7286cea59378ff7de38e77e42d37" exitCode=0 Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.273069 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerDied","Data":"cd2ce0b28e7c02dd95262e56926aedcc4c6b7286cea59378ff7de38e77e42d37"} Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.273104 4992 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerStarted","Data":"934765147c89e963f8dbc165d2eb37c802d121091e88b971a43eb12936563c2f"} Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.276589 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cjhxz" event={"ID":"9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9","Type":"ContainerDied","Data":"f26e4ff537ce4e8b96aa20eee1890e2d8e33dee102f402ea5a1a986f11888aa4"} Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.276729 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cjhxz" Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.276762 4992 scope.go:117] "RemoveContainer" containerID="67346b38a7e391abb6f5084d31b66adcbe05468960fdb1554e30dfd97fdaebac" Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.304615 4992 scope.go:117] "RemoveContainer" containerID="f9893f791aaa481e0ea6d4c5410df82f9fd4aae5ac9992417fe98c62984dec2f" Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.325069 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cjhxz"] Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.334010 4992 scope.go:117] "RemoveContainer" containerID="6b21a34a491759f6e049e2faea5b6913a7361e51f547642e9eaa7592d13987fe" Dec 11 08:51:04 crc kubenswrapper[4992]: I1211 08:51:04.335712 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cjhxz"] Dec 11 08:51:06 crc kubenswrapper[4992]: I1211 08:51:06.106729 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" path="/var/lib/kubelet/pods/9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9/volumes" Dec 11 08:51:09 crc kubenswrapper[4992]: I1211 08:51:09.094907 4992 scope.go:117] "RemoveContainer" 
containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:51:09 crc kubenswrapper[4992]: E1211 08:51:09.095595 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:51:11 crc kubenswrapper[4992]: I1211 08:51:11.337357 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerStarted","Data":"e81f152061cc0f29175b4efcdf1e0788dff17c00337dd28658d5e5dcfe33d29b"} Dec 11 08:51:12 crc kubenswrapper[4992]: I1211 08:51:12.349908 4992 generic.go:334] "Generic (PLEG): container finished" podID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerID="e81f152061cc0f29175b4efcdf1e0788dff17c00337dd28658d5e5dcfe33d29b" exitCode=0 Dec 11 08:51:12 crc kubenswrapper[4992]: I1211 08:51:12.350031 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerDied","Data":"e81f152061cc0f29175b4efcdf1e0788dff17c00337dd28658d5e5dcfe33d29b"} Dec 11 08:51:14 crc kubenswrapper[4992]: I1211 08:51:14.368653 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerStarted","Data":"fd0d168712e5282abb8f2c9d42ba6506630cda998260783f34d709fc2eb291dd"} Dec 11 08:51:14 crc kubenswrapper[4992]: I1211 08:51:14.385814 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9ghw" 
podStartSLOduration=3.45320452 podStartE2EDuration="12.38579399s" podCreationTimestamp="2025-12-11 08:51:02 +0000 UTC" firstStartedPulling="2025-12-11 08:51:04.275114144 +0000 UTC m=+1688.534588070" lastFinishedPulling="2025-12-11 08:51:13.207703614 +0000 UTC m=+1697.467177540" observedRunningTime="2025-12-11 08:51:14.384756505 +0000 UTC m=+1698.644230461" watchObservedRunningTime="2025-12-11 08:51:14.38579399 +0000 UTC m=+1698.645267916" Dec 11 08:51:22 crc kubenswrapper[4992]: I1211 08:51:22.095908 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:51:22 crc kubenswrapper[4992]: E1211 08:51:22.097136 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:51:22 crc kubenswrapper[4992]: I1211 08:51:22.843832 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:22 crc kubenswrapper[4992]: I1211 08:51:22.845893 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:22 crc kubenswrapper[4992]: I1211 08:51:22.899070 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:23 crc kubenswrapper[4992]: I1211 08:51:23.493067 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:23 crc kubenswrapper[4992]: I1211 08:51:23.537008 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-p9ghw"] Dec 11 08:51:25 crc kubenswrapper[4992]: I1211 08:51:25.468879 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p9ghw" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="registry-server" containerID="cri-o://fd0d168712e5282abb8f2c9d42ba6506630cda998260783f34d709fc2eb291dd" gracePeriod=2 Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.480001 4992 generic.go:334] "Generic (PLEG): container finished" podID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerID="fd0d168712e5282abb8f2c9d42ba6506630cda998260783f34d709fc2eb291dd" exitCode=0 Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.480158 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerDied","Data":"fd0d168712e5282abb8f2c9d42ba6506630cda998260783f34d709fc2eb291dd"} Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.620603 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.707811 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-utilities\") pod \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.707889 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-catalog-content\") pod \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.708032 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scvtb\" (UniqueName: \"kubernetes.io/projected/a0f25964-d793-4487-b3bc-1a33fe80ec9a-kube-api-access-scvtb\") pod \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\" (UID: \"a0f25964-d793-4487-b3bc-1a33fe80ec9a\") " Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.708981 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-utilities" (OuterVolumeSpecName: "utilities") pod "a0f25964-d793-4487-b3bc-1a33fe80ec9a" (UID: "a0f25964-d793-4487-b3bc-1a33fe80ec9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.709432 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.713903 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f25964-d793-4487-b3bc-1a33fe80ec9a-kube-api-access-scvtb" (OuterVolumeSpecName: "kube-api-access-scvtb") pod "a0f25964-d793-4487-b3bc-1a33fe80ec9a" (UID: "a0f25964-d793-4487-b3bc-1a33fe80ec9a"). InnerVolumeSpecName "kube-api-access-scvtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.765069 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0f25964-d793-4487-b3bc-1a33fe80ec9a" (UID: "a0f25964-d793-4487-b3bc-1a33fe80ec9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.810855 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0f25964-d793-4487-b3bc-1a33fe80ec9a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:51:26 crc kubenswrapper[4992]: I1211 08:51:26.810891 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scvtb\" (UniqueName: \"kubernetes.io/projected/a0f25964-d793-4487-b3bc-1a33fe80ec9a-kube-api-access-scvtb\") on node \"crc\" DevicePath \"\"" Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.492716 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9ghw" event={"ID":"a0f25964-d793-4487-b3bc-1a33fe80ec9a","Type":"ContainerDied","Data":"934765147c89e963f8dbc165d2eb37c802d121091e88b971a43eb12936563c2f"} Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.492781 4992 scope.go:117] "RemoveContainer" containerID="fd0d168712e5282abb8f2c9d42ba6506630cda998260783f34d709fc2eb291dd" Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.492969 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9ghw" Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.529111 4992 scope.go:117] "RemoveContainer" containerID="e81f152061cc0f29175b4efcdf1e0788dff17c00337dd28658d5e5dcfe33d29b" Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.531823 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p9ghw"] Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.553854 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p9ghw"] Dec 11 08:51:27 crc kubenswrapper[4992]: I1211 08:51:27.567114 4992 scope.go:117] "RemoveContainer" containerID="cd2ce0b28e7c02dd95262e56926aedcc4c6b7286cea59378ff7de38e77e42d37" Dec 11 08:51:28 crc kubenswrapper[4992]: I1211 08:51:28.106546 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" path="/var/lib/kubelet/pods/a0f25964-d793-4487-b3bc-1a33fe80ec9a/volumes" Dec 11 08:51:36 crc kubenswrapper[4992]: I1211 08:51:36.111012 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:51:36 crc kubenswrapper[4992]: E1211 08:51:36.111890 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.045734 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6drxf"] Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.056514 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-fc5b-account-create-update-jjhh4"] Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.064059 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-jbtss"] Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.072408 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6drxf"] Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.081337 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fc5b-account-create-update-jjhh4"] Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.090500 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-jbtss"] Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.105894 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="094dc08d-1b1c-4481-8ab4-127dc18a6b01" path="/var/lib/kubelet/pods/094dc08d-1b1c-4481-8ab4-127dc18a6b01/volumes" Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.106528 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7979a378-800d-4749-bc4b-98f4c85b2624" path="/var/lib/kubelet/pods/7979a378-800d-4749-bc4b-98f4c85b2624/volumes" Dec 11 08:51:40 crc kubenswrapper[4992]: I1211 08:51:40.107131 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d56e1b-849a-4c3e-b588-fb052a8bfb46" path="/var/lib/kubelet/pods/c3d56e1b-849a-4c3e-b588-fb052a8bfb46/volumes" Dec 11 08:51:43 crc kubenswrapper[4992]: I1211 08:51:43.027223 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lxx2h"] Dec 11 08:51:43 crc kubenswrapper[4992]: I1211 08:51:43.037072 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-35b4-account-create-update-hxsnq"] Dec 11 08:51:43 crc kubenswrapper[4992]: I1211 08:51:43.045680 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lxx2h"] Dec 11 08:51:43 crc kubenswrapper[4992]: 
I1211 08:51:43.059322 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b044-account-create-update-d7xvv"] Dec 11 08:51:43 crc kubenswrapper[4992]: I1211 08:51:43.070094 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b044-account-create-update-d7xvv"] Dec 11 08:51:43 crc kubenswrapper[4992]: I1211 08:51:43.080924 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-35b4-account-create-update-hxsnq"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.028965 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-v24w2"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.040389 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4mn8r"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.051017 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6c6e-account-create-update-77t4h"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.058996 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4466-account-create-update-ml5rl"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.066944 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-v24w2"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.074648 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4mn8r"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.083244 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6c6e-account-create-update-77t4h"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.091740 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4466-account-create-update-ml5rl"] Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.105981 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2c61d47b-62bb-48b1-83ae-1aa375e422cd" path="/var/lib/kubelet/pods/2c61d47b-62bb-48b1-83ae-1aa375e422cd/volumes" Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.106584 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32320aa4-106e-4dee-89f2-e5034dde3022" path="/var/lib/kubelet/pods/32320aa4-106e-4dee-89f2-e5034dde3022/volumes" Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.107221 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a2c64c-7a55-413d-ab90-c79bf73b9951" path="/var/lib/kubelet/pods/33a2c64c-7a55-413d-ab90-c79bf73b9951/volumes" Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.108025 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa3c025-0da1-4697-bba1-57cb62d804e5" path="/var/lib/kubelet/pods/4aa3c025-0da1-4697-bba1-57cb62d804e5/volumes" Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.109616 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59aaff84-4003-4bf1-ba6b-c1dbadc40702" path="/var/lib/kubelet/pods/59aaff84-4003-4bf1-ba6b-c1dbadc40702/volumes" Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.111296 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2a216d-05f7-4862-9e26-3e7ca9c6b56c" path="/var/lib/kubelet/pods/ab2a216d-05f7-4862-9e26-3e7ca9c6b56c/volumes" Dec 11 08:51:44 crc kubenswrapper[4992]: I1211 08:51:44.112034 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff393885-17ba-4d2f-a3ea-b74533842367" path="/var/lib/kubelet/pods/ff393885-17ba-4d2f-a3ea-b74533842367/volumes" Dec 11 08:51:48 crc kubenswrapper[4992]: I1211 08:51:48.053843 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-8deb-account-create-update-kqqw2"] Dec 11 08:51:48 crc kubenswrapper[4992]: I1211 08:51:48.063726 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-8deb-account-create-update-kqqw2"] Dec 11 08:51:48 crc 
kubenswrapper[4992]: I1211 08:51:48.107131 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6555c6-b872-4fde-be87-155133d67f13" path="/var/lib/kubelet/pods/7f6555c6-b872-4fde-be87-155133d67f13/volumes" Dec 11 08:51:49 crc kubenswrapper[4992]: I1211 08:51:49.036614 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ssqrt"] Dec 11 08:51:49 crc kubenswrapper[4992]: I1211 08:51:49.050506 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ssqrt"] Dec 11 08:51:49 crc kubenswrapper[4992]: I1211 08:51:49.095286 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:51:49 crc kubenswrapper[4992]: E1211 08:51:49.095595 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:51:50 crc kubenswrapper[4992]: I1211 08:51:50.106456 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e82505-129d-467c-8224-c36229c2da21" path="/var/lib/kubelet/pods/96e82505-129d-467c-8224-c36229c2da21/volumes" Dec 11 08:52:03 crc kubenswrapper[4992]: I1211 08:52:03.095536 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:52:03 crc kubenswrapper[4992]: E1211 08:52:03.096495 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:52:09 crc kubenswrapper[4992]: I1211 08:52:09.888412 4992 scope.go:117] "RemoveContainer" containerID="bf1b6e43f267257a954f9c6ca1c55078d8b53f617af9e0f67859d0fc0a9b408b" Dec 11 08:52:09 crc kubenswrapper[4992]: I1211 08:52:09.912973 4992 scope.go:117] "RemoveContainer" containerID="7c50a99b24522578ce9c4f576971a63cd3838bec331c7d3bc1a5aa84b6ffc5ad" Dec 11 08:52:09 crc kubenswrapper[4992]: I1211 08:52:09.958276 4992 scope.go:117] "RemoveContainer" containerID="d0acfb3a8972eada88bed129ea56fbb0e97b97f25fa695fdab18afa6d6edc59f" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.006843 4992 scope.go:117] "RemoveContainer" containerID="51110a2005ba0d8f37798fbe9e79f67a055f99db12ea65dc5ce346dd00afc4dc" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.057588 4992 scope.go:117] "RemoveContainer" containerID="fb5cc07d3544674972fd36edcf1e0b877dd67b53335732909f5de3173a102cc6" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.102069 4992 scope.go:117] "RemoveContainer" containerID="8d4af0619a669f75471429717a489568fd9be95d1f2e35e4641b967b2b4f8218" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.137162 4992 scope.go:117] "RemoveContainer" containerID="55418836b53fcadc757c12fd247d86a1754aa63e8646a2c326dc649fee032aea" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.193015 4992 scope.go:117] "RemoveContainer" containerID="62433edbe0157e262b502417ef23a173380e9ce0a633628fd4658c68107c6d38" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.216193 4992 scope.go:117] "RemoveContainer" containerID="8bd179f404363500f65e78de070e89d64d789eba0c48237d92df54ae2f59999e" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.241145 4992 scope.go:117] "RemoveContainer" 
containerID="06e41b92bab7ec56c330be8407c64e6b429869492882633361da98960df722ec" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.261298 4992 scope.go:117] "RemoveContainer" containerID="1d80ba546d99677c414f332750877d6c6d368e8737e0bb6278b3c9fbe33fe4e1" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.285242 4992 scope.go:117] "RemoveContainer" containerID="ab7ec2ba9fc648b014a801ed6425f22145cff02acbca3a012c445200dcb98044" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.308395 4992 scope.go:117] "RemoveContainer" containerID="4aafc1904c8dfbc12f0e49b3def15fd0e5d20b6addc1c1f512c58e7af1fbeb14" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.330093 4992 scope.go:117] "RemoveContainer" containerID="8edb39bda4d87d3a2b6872bd08e9fbe8708057efe489ec90401b1daba8f4cadf" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.349590 4992 scope.go:117] "RemoveContainer" containerID="84f72e32e4a1536c03b6e45dd1265c587a75876ddad6a4fe80a697dec16a67ff" Dec 11 08:52:10 crc kubenswrapper[4992]: I1211 08:52:10.369658 4992 scope.go:117] "RemoveContainer" containerID="e79f58df599ab8db845363e3e3c6dd53fe4bd52b35c34fd7bff318a703d2a05a" Dec 11 08:52:17 crc kubenswrapper[4992]: I1211 08:52:17.096149 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:52:17 crc kubenswrapper[4992]: E1211 08:52:17.097366 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:52:29 crc kubenswrapper[4992]: I1211 08:52:29.095554 4992 scope.go:117] "RemoveContainer" 
containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:52:29 crc kubenswrapper[4992]: E1211 08:52:29.096474 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:52:33 crc kubenswrapper[4992]: I1211 08:52:33.052377 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rsks9"] Dec 11 08:52:33 crc kubenswrapper[4992]: I1211 08:52:33.064906 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rsks9"] Dec 11 08:52:34 crc kubenswrapper[4992]: I1211 08:52:34.119613 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df9e9336-fd88-46fa-9a9e-2533e27df0ed" path="/var/lib/kubelet/pods/df9e9336-fd88-46fa-9a9e-2533e27df0ed/volumes" Dec 11 08:52:38 crc kubenswrapper[4992]: I1211 08:52:38.147966 4992 generic.go:334] "Generic (PLEG): container finished" podID="6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" containerID="64bbb46f78de345351d85b260130f775dbaf065cbea2cbb43732bad237a1146e" exitCode=0 Dec 11 08:52:38 crc kubenswrapper[4992]: I1211 08:52:38.148060 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" event={"ID":"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6","Type":"ContainerDied","Data":"64bbb46f78de345351d85b260130f775dbaf065cbea2cbb43732bad237a1146e"} Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.707472 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.860528 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-ssh-key\") pod \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.860930 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory\") pod \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.861063 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k7tq\" (UniqueName: \"kubernetes.io/projected/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-kube-api-access-7k7tq\") pod \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.861281 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-bootstrap-combined-ca-bundle\") pod \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\" (UID: \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.866311 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-kube-api-access-7k7tq" (OuterVolumeSpecName: "kube-api-access-7k7tq") pod "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" (UID: "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6"). InnerVolumeSpecName "kube-api-access-7k7tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.869175 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" (UID: "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:52:39 crc kubenswrapper[4992]: E1211 08:52:39.885222 4992 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory podName:6066a587-fdc9-4ae8-ad82-4ddf1844f9e6 nodeName:}" failed. No retries permitted until 2025-12-11 08:52:40.385193038 +0000 UTC m=+1784.644666964 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory") pod "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" (UID: "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6") : error deleting /var/lib/kubelet/pods/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6/volume-subpaths: remove /var/lib/kubelet/pods/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6/volume-subpaths: no such file or directory Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.887809 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" (UID: "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.964028 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k7tq\" (UniqueName: \"kubernetes.io/projected/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-kube-api-access-7k7tq\") on node \"crc\" DevicePath \"\"" Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.964073 4992 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:52:39 crc kubenswrapper[4992]: I1211 08:52:39.964084 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.167158 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" event={"ID":"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6","Type":"ContainerDied","Data":"8c7bbc2fe4dac9825e6d3142bfb29a75b16047473695293c69fd7d47260b305e"} Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.167232 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c7bbc2fe4dac9825e6d3142bfb29a75b16047473695293c69fd7d47260b305e" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.167207 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243246 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc"] Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243683 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="extract-utilities" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243704 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="extract-utilities" Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243713 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="extract-utilities" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243719 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="extract-utilities" Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243751 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="registry-server" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243758 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="registry-server" Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243769 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="registry-server" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243775 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="registry-server" Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243788 4992 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243795 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243808 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="extract-content" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243813 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="extract-content" Dec 11 08:52:40 crc kubenswrapper[4992]: E1211 08:52:40.243824 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="extract-content" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.243830 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="extract-content" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.244004 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="9da83bd4-2bcd-4c8e-9a2b-adb256aef9d9" containerName="registry-server" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.244033 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f25964-d793-4487-b3bc-1a33fe80ec9a" containerName="registry-server" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.244052 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.244709 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.261148 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc"] Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.370133 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjk6\" (UniqueName: \"kubernetes.io/projected/6af1f9e3-e349-40a7-8985-f114c1c808b3-kube-api-access-jvjk6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.370197 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.370390 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.471507 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory\") pod \"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\" (UID: 
\"6066a587-fdc9-4ae8-ad82-4ddf1844f9e6\") " Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.471969 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjk6\" (UniqueName: \"kubernetes.io/projected/6af1f9e3-e349-40a7-8985-f114c1c808b3-kube-api-access-jvjk6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.472000 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.472037 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.475178 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory" (OuterVolumeSpecName: "inventory") pod "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6" (UID: "6066a587-fdc9-4ae8-ad82-4ddf1844f9e6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.475532 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.476114 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.488681 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjk6\" (UniqueName: \"kubernetes.io/projected/6af1f9e3-e349-40a7-8985-f114c1c808b3-kube-api-access-jvjk6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bswcc\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.563376 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:52:40 crc kubenswrapper[4992]: I1211 08:52:40.574054 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6066a587-fdc9-4ae8-ad82-4ddf1844f9e6-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:52:41 crc kubenswrapper[4992]: I1211 08:52:41.115846 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc"] Dec 11 08:52:41 crc kubenswrapper[4992]: I1211 08:52:41.180031 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" event={"ID":"6af1f9e3-e349-40a7-8985-f114c1c808b3","Type":"ContainerStarted","Data":"df63181f921fffdbc1e422b243aa61a1bb1fd8b576eb741171dbfbe492601af6"} Dec 11 08:52:42 crc kubenswrapper[4992]: I1211 08:52:42.198836 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" event={"ID":"6af1f9e3-e349-40a7-8985-f114c1c808b3","Type":"ContainerStarted","Data":"576023ae96126ebe4bf8cfa00fc79ee1a1dc6d8bc617cc304fe49784023b1b7b"} Dec 11 08:52:42 crc kubenswrapper[4992]: I1211 08:52:42.242181 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" podStartSLOduration=1.582819964 podStartE2EDuration="2.242155141s" podCreationTimestamp="2025-12-11 08:52:40 +0000 UTC" firstStartedPulling="2025-12-11 08:52:41.121657772 +0000 UTC m=+1785.381131698" lastFinishedPulling="2025-12-11 08:52:41.780992949 +0000 UTC m=+1786.040466875" observedRunningTime="2025-12-11 08:52:42.217398177 +0000 UTC m=+1786.476872133" watchObservedRunningTime="2025-12-11 08:52:42.242155141 +0000 UTC m=+1786.501629067" Dec 11 08:52:43 crc kubenswrapper[4992]: I1211 08:52:43.095494 4992 scope.go:117] "RemoveContainer" 
containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:52:43 crc kubenswrapper[4992]: E1211 08:52:43.095937 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:52:55 crc kubenswrapper[4992]: I1211 08:52:55.096126 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:52:55 crc kubenswrapper[4992]: E1211 08:52:55.096871 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:53:06 crc kubenswrapper[4992]: I1211 08:53:06.108301 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:53:06 crc kubenswrapper[4992]: E1211 08:53:06.109533 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:53:08 crc kubenswrapper[4992]: I1211 08:53:08.041877 4992 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-fx6rx"] Dec 11 08:53:08 crc kubenswrapper[4992]: I1211 08:53:08.050572 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fx6rx"] Dec 11 08:53:08 crc kubenswrapper[4992]: I1211 08:53:08.108562 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19dbc853-9df9-491e-af8d-6c13547cd478" path="/var/lib/kubelet/pods/19dbc853-9df9-491e-af8d-6c13547cd478/volumes" Dec 11 08:53:10 crc kubenswrapper[4992]: I1211 08:53:10.687166 4992 scope.go:117] "RemoveContainer" containerID="1f2495e5635b1c355572f3076f2c0d2d2b6b241a128664c1306dc817335d446c" Dec 11 08:53:10 crc kubenswrapper[4992]: I1211 08:53:10.723682 4992 scope.go:117] "RemoveContainer" containerID="69c80914ff00ec3ea30b5a00fa3ead0f2b2b214b18ea3b2ac18b6eb428935f52" Dec 11 08:53:10 crc kubenswrapper[4992]: I1211 08:53:10.790476 4992 scope.go:117] "RemoveContainer" containerID="bec3e547db1d2714df96d3b7f8490f9777a225f5620875807246aab96a61b628" Dec 11 08:53:10 crc kubenswrapper[4992]: I1211 08:53:10.848928 4992 scope.go:117] "RemoveContainer" containerID="dbcd517132b6a422470be54215f7a2abb11139786ef457ca137fdd9fa2d34001" Dec 11 08:53:10 crc kubenswrapper[4992]: I1211 08:53:10.878336 4992 scope.go:117] "RemoveContainer" containerID="6a9fc6abaead35ec9384932e100c7af9b6e32b797b7afe3943fd2d6d2403bf7a" Dec 11 08:53:10 crc kubenswrapper[4992]: I1211 08:53:10.897422 4992 scope.go:117] "RemoveContainer" containerID="92180b1fe174c1b32678a1a5b282f097afedc611e34798df628dec1c6af4c619" Dec 11 08:53:19 crc kubenswrapper[4992]: I1211 08:53:19.095009 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:53:19 crc kubenswrapper[4992]: E1211 08:53:19.095894 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:53:32 crc kubenswrapper[4992]: I1211 08:53:32.094843 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:53:32 crc kubenswrapper[4992]: E1211 08:53:32.095668 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:53:45 crc kubenswrapper[4992]: I1211 08:53:45.094870 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:53:45 crc kubenswrapper[4992]: E1211 08:53:45.095704 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:53:47 crc kubenswrapper[4992]: I1211 08:53:47.044130 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-smb2h"] Dec 11 08:53:47 crc kubenswrapper[4992]: I1211 08:53:47.055836 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-smb2h"] Dec 11 08:53:48 crc kubenswrapper[4992]: I1211 08:53:48.108817 4992 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6952512b-7da5-4bc5-b91f-bbeb61056854" path="/var/lib/kubelet/pods/6952512b-7da5-4bc5-b91f-bbeb61056854/volumes" Dec 11 08:53:59 crc kubenswrapper[4992]: I1211 08:53:59.094907 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:53:59 crc kubenswrapper[4992]: E1211 08:53:59.095875 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:54:10 crc kubenswrapper[4992]: I1211 08:54:10.986502 4992 scope.go:117] "RemoveContainer" containerID="4f96a2f0ff0421ec2021b5ff15e71261702f804f2987ff43bb3f347dabb4dc10" Dec 11 08:54:12 crc kubenswrapper[4992]: I1211 08:54:12.039149 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-tcqkm"] Dec 11 08:54:12 crc kubenswrapper[4992]: I1211 08:54:12.047725 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-tcqkm"] Dec 11 08:54:12 crc kubenswrapper[4992]: I1211 08:54:12.104537 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8231c59-b8e7-4f7d-aeb0-888d579425ac" path="/var/lib/kubelet/pods/f8231c59-b8e7-4f7d-aeb0-888d579425ac/volumes" Dec 11 08:54:13 crc kubenswrapper[4992]: I1211 08:54:13.095564 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:54:13 crc kubenswrapper[4992]: E1211 08:54:13.095880 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:54:20 crc kubenswrapper[4992]: I1211 08:54:20.031652 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-v95pn"] Dec 11 08:54:20 crc kubenswrapper[4992]: I1211 08:54:20.042331 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-v95pn"] Dec 11 08:54:20 crc kubenswrapper[4992]: I1211 08:54:20.104683 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b" path="/var/lib/kubelet/pods/afb50d3f-18a0-4dcd-8f4b-9dfa94f2a31b/volumes" Dec 11 08:54:27 crc kubenswrapper[4992]: I1211 08:54:27.036195 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-thwnm"] Dec 11 08:54:27 crc kubenswrapper[4992]: I1211 08:54:27.046225 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-thwnm"] Dec 11 08:54:27 crc kubenswrapper[4992]: I1211 08:54:27.094894 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:54:27 crc kubenswrapper[4992]: E1211 08:54:27.095408 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 08:54:28 crc kubenswrapper[4992]: I1211 08:54:28.034933 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-8mmcr"] Dec 11 08:54:28 
crc kubenswrapper[4992]: I1211 08:54:28.045668 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-8mmcr"] Dec 11 08:54:28 crc kubenswrapper[4992]: I1211 08:54:28.119087 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="429dae0d-117e-4943-966e-11460f9676b7" path="/var/lib/kubelet/pods/429dae0d-117e-4943-966e-11460f9676b7/volumes" Dec 11 08:54:28 crc kubenswrapper[4992]: I1211 08:54:28.119764 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c99101-825a-4a3b-acf0-7fc522f3631f" path="/var/lib/kubelet/pods/73c99101-825a-4a3b-acf0-7fc522f3631f/volumes" Dec 11 08:54:29 crc kubenswrapper[4992]: I1211 08:54:29.390237 4992 generic.go:334] "Generic (PLEG): container finished" podID="6af1f9e3-e349-40a7-8985-f114c1c808b3" containerID="576023ae96126ebe4bf8cfa00fc79ee1a1dc6d8bc617cc304fe49784023b1b7b" exitCode=0 Dec 11 08:54:29 crc kubenswrapper[4992]: I1211 08:54:29.390365 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" event={"ID":"6af1f9e3-e349-40a7-8985-f114c1c808b3","Type":"ContainerDied","Data":"576023ae96126ebe4bf8cfa00fc79ee1a1dc6d8bc617cc304fe49784023b1b7b"} Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.840708 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.858596 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvjk6\" (UniqueName: \"kubernetes.io/projected/6af1f9e3-e349-40a7-8985-f114c1c808b3-kube-api-access-jvjk6\") pod \"6af1f9e3-e349-40a7-8985-f114c1c808b3\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.858671 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-inventory\") pod \"6af1f9e3-e349-40a7-8985-f114c1c808b3\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.858934 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-ssh-key\") pod \"6af1f9e3-e349-40a7-8985-f114c1c808b3\" (UID: \"6af1f9e3-e349-40a7-8985-f114c1c808b3\") " Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.865927 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af1f9e3-e349-40a7-8985-f114c1c808b3-kube-api-access-jvjk6" (OuterVolumeSpecName: "kube-api-access-jvjk6") pod "6af1f9e3-e349-40a7-8985-f114c1c808b3" (UID: "6af1f9e3-e349-40a7-8985-f114c1c808b3"). InnerVolumeSpecName "kube-api-access-jvjk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.890885 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6af1f9e3-e349-40a7-8985-f114c1c808b3" (UID: "6af1f9e3-e349-40a7-8985-f114c1c808b3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.892777 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-inventory" (OuterVolumeSpecName: "inventory") pod "6af1f9e3-e349-40a7-8985-f114c1c808b3" (UID: "6af1f9e3-e349-40a7-8985-f114c1c808b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.961656 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.961700 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvjk6\" (UniqueName: \"kubernetes.io/projected/6af1f9e3-e349-40a7-8985-f114c1c808b3-kube-api-access-jvjk6\") on node \"crc\" DevicePath \"\"" Dec 11 08:54:30 crc kubenswrapper[4992]: I1211 08:54:30.961715 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6af1f9e3-e349-40a7-8985-f114c1c808b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.410425 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" event={"ID":"6af1f9e3-e349-40a7-8985-f114c1c808b3","Type":"ContainerDied","Data":"df63181f921fffdbc1e422b243aa61a1bb1fd8b576eb741171dbfbe492601af6"} Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.410832 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df63181f921fffdbc1e422b243aa61a1bb1fd8b576eb741171dbfbe492601af6" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.410476 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bswcc" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.494191 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq"] Dec 11 08:54:31 crc kubenswrapper[4992]: E1211 08:54:31.499181 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af1f9e3-e349-40a7-8985-f114c1c808b3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.499450 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af1f9e3-e349-40a7-8985-f114c1c808b3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.499787 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af1f9e3-e349-40a7-8985-f114c1c808b3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.500795 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.503148 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.503162 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.503442 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.506648 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.510023 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq"] Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.573387 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.573656 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.573938 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghnr2\" (UniqueName: \"kubernetes.io/projected/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-kube-api-access-ghnr2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.675719 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.675865 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.675986 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnr2\" (UniqueName: \"kubernetes.io/projected/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-kube-api-access-ghnr2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.681561 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.682778 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.709154 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghnr2\" (UniqueName: \"kubernetes.io/projected/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-kube-api-access-ghnr2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-scqpq\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:31 crc kubenswrapper[4992]: I1211 08:54:31.831851 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:54:32 crc kubenswrapper[4992]: I1211 08:54:32.326172 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq"] Dec 11 08:54:32 crc kubenswrapper[4992]: I1211 08:54:32.422767 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" event={"ID":"8f3e0555-dd40-4a68-bcd1-df6fb4be45df","Type":"ContainerStarted","Data":"f49464f533029b7d3cbe90ec09e097ac7071c0bc06d81244f1234e885bdba925"} Dec 11 08:54:33 crc kubenswrapper[4992]: I1211 08:54:33.433308 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" event={"ID":"8f3e0555-dd40-4a68-bcd1-df6fb4be45df","Type":"ContainerStarted","Data":"56ce73314135df59426315ef4340cb981fd7185701a29c76214dc5379fe0518a"} Dec 11 08:54:33 crc kubenswrapper[4992]: I1211 08:54:33.457729 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" podStartSLOduration=1.568695741 podStartE2EDuration="2.457707563s" podCreationTimestamp="2025-12-11 08:54:31 +0000 UTC" firstStartedPulling="2025-12-11 08:54:32.332539388 +0000 UTC m=+1896.592013314" lastFinishedPulling="2025-12-11 08:54:33.22155121 +0000 UTC m=+1897.481025136" observedRunningTime="2025-12-11 08:54:33.45064135 +0000 UTC m=+1897.710115286" watchObservedRunningTime="2025-12-11 08:54:33.457707563 +0000 UTC m=+1897.717181489" Dec 11 08:54:40 crc kubenswrapper[4992]: I1211 08:54:40.095242 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:54:41 crc kubenswrapper[4992]: I1211 08:54:41.500677 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"3a3dc42b5a3cd43c62970b42a2e0157ec46f68f2e53bb7c9edf2af183a976943"} Dec 11 08:55:02 crc kubenswrapper[4992]: I1211 08:55:02.037589 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-x24wj"] Dec 11 08:55:02 crc kubenswrapper[4992]: I1211 08:55:02.047014 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-x24wj"] Dec 11 08:55:02 crc kubenswrapper[4992]: I1211 08:55:02.106317 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88276bfc-171a-4c6d-b2a8-342e9a6f856d" path="/var/lib/kubelet/pods/88276bfc-171a-4c6d-b2a8-342e9a6f856d/volumes" Dec 11 08:55:06 crc kubenswrapper[4992]: I1211 08:55:06.028206 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jqlwz"] Dec 11 08:55:06 crc kubenswrapper[4992]: I1211 08:55:06.037415 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jqlwz"] Dec 11 08:55:06 crc kubenswrapper[4992]: I1211 08:55:06.111869 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953feb5a-bed1-4457-b25e-fa4716bbad75" path="/var/lib/kubelet/pods/953feb5a-bed1-4457-b25e-fa4716bbad75/volumes" Dec 11 08:55:07 crc kubenswrapper[4992]: I1211 08:55:07.036544 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-580c-account-create-update-5jxfz"] Dec 11 08:55:07 crc kubenswrapper[4992]: I1211 08:55:07.050408 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tnwhj"] Dec 11 08:55:07 crc kubenswrapper[4992]: I1211 08:55:07.059971 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-580c-account-create-update-5jxfz"] Dec 11 08:55:07 crc kubenswrapper[4992]: I1211 08:55:07.068283 4992 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-api-db-create-tnwhj"] Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.031282 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c59a-account-create-update-db54s"] Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.037442 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c9e3-account-create-update-vxlqn"] Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.046546 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c9e3-account-create-update-vxlqn"] Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.054357 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c59a-account-create-update-db54s"] Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.128531 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b8792cf-dd70-4b14-b007-9e6ed8632bd6" path="/var/lib/kubelet/pods/0b8792cf-dd70-4b14-b007-9e6ed8632bd6/volumes" Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.132726 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da60d29-4843-4e25-804b-a5b89de8f2f2" path="/var/lib/kubelet/pods/1da60d29-4843-4e25-804b-a5b89de8f2f2/volumes" Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.134198 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ba867d-ba4d-4bb1-81aa-b46fa62f43bd" path="/var/lib/kubelet/pods/94ba867d-ba4d-4bb1-81aa-b46fa62f43bd/volumes" Dec 11 08:55:08 crc kubenswrapper[4992]: I1211 08:55:08.135446 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c870d81b-61d3-4eb2-b408-ce51fae0e19f" path="/var/lib/kubelet/pods/c870d81b-61d3-4eb2-b408-ce51fae0e19f/volumes" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.064385 4992 scope.go:117] "RemoveContainer" containerID="32c7226013f1e776450786bea9294c2b1844b1f8af8fd9332996ee055cfbaac0" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 
08:55:11.086017 4992 scope.go:117] "RemoveContainer" containerID="ce590d6db83eb7bb2add34306fa84d77454d77c15226f1f7e94828e6912c04da" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.137220 4992 scope.go:117] "RemoveContainer" containerID="5c3a2e0e6d6cfd7220c851fd102ab63a76e1473d65aaa29c5c451f2b747ec2e2" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.179813 4992 scope.go:117] "RemoveContainer" containerID="ae4ff2d1d3811f0275fe7a48683a0f8b28982a4aeedd7083fc8e86a3c3c00288" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.244556 4992 scope.go:117] "RemoveContainer" containerID="3f03e400a7702730c5b7dd5c9dde3e2999049e173690633185b9326c64efa105" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.268787 4992 scope.go:117] "RemoveContainer" containerID="d298afd08c4ad8bf5a226067f04101999cfbec017a9eaac331c44b6366535b49" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.310960 4992 scope.go:117] "RemoveContainer" containerID="a2ddd2ace3bdbaabf11623749930b58193de83e7a692b0fdb1b7bf1c165d487d" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.337481 4992 scope.go:117] "RemoveContainer" containerID="713036c0ac8c668432994f997bd6b71000fd453e2752a8bed220c38b35a448f8" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.358303 4992 scope.go:117] "RemoveContainer" containerID="22d72687af96c39323b29da101171820fbb3d544852bf7b6e45acf5c8555cf8e" Dec 11 08:55:11 crc kubenswrapper[4992]: I1211 08:55:11.397873 4992 scope.go:117] "RemoveContainer" containerID="68e8663a144f38c29f8213a0711f49a3ad8430c222c5d0d7252ccb15fca0fc45" Dec 11 08:55:38 crc kubenswrapper[4992]: I1211 08:55:38.039172 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n56r8"] Dec 11 08:55:38 crc kubenswrapper[4992]: I1211 08:55:38.051114 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n56r8"] Dec 11 08:55:38 crc kubenswrapper[4992]: I1211 08:55:38.105079 4992 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="8ead296a-d746-4d8b-a8c5-b51c08bf2422" path="/var/lib/kubelet/pods/8ead296a-d746-4d8b-a8c5-b51c08bf2422/volumes" Dec 11 08:55:44 crc kubenswrapper[4992]: I1211 08:55:44.050811 4992 generic.go:334] "Generic (PLEG): container finished" podID="8f3e0555-dd40-4a68-bcd1-df6fb4be45df" containerID="56ce73314135df59426315ef4340cb981fd7185701a29c76214dc5379fe0518a" exitCode=0 Dec 11 08:55:44 crc kubenswrapper[4992]: I1211 08:55:44.051337 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" event={"ID":"8f3e0555-dd40-4a68-bcd1-df6fb4be45df","Type":"ContainerDied","Data":"56ce73314135df59426315ef4340cb981fd7185701a29c76214dc5379fe0518a"} Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.478065 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.622793 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-ssh-key\") pod \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.623161 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghnr2\" (UniqueName: \"kubernetes.io/projected/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-kube-api-access-ghnr2\") pod \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\" (UID: \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.623326 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-inventory\") pod \"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\" (UID: 
\"8f3e0555-dd40-4a68-bcd1-df6fb4be45df\") " Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.631282 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-kube-api-access-ghnr2" (OuterVolumeSpecName: "kube-api-access-ghnr2") pod "8f3e0555-dd40-4a68-bcd1-df6fb4be45df" (UID: "8f3e0555-dd40-4a68-bcd1-df6fb4be45df"). InnerVolumeSpecName "kube-api-access-ghnr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.655379 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-inventory" (OuterVolumeSpecName: "inventory") pod "8f3e0555-dd40-4a68-bcd1-df6fb4be45df" (UID: "8f3e0555-dd40-4a68-bcd1-df6fb4be45df"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.658482 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8f3e0555-dd40-4a68-bcd1-df6fb4be45df" (UID: "8f3e0555-dd40-4a68-bcd1-df6fb4be45df"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.725464 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.725749 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghnr2\" (UniqueName: \"kubernetes.io/projected/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-kube-api-access-ghnr2\") on node \"crc\" DevicePath \"\"" Dec 11 08:55:45 crc kubenswrapper[4992]: I1211 08:55:45.725973 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f3e0555-dd40-4a68-bcd1-df6fb4be45df-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.067865 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" event={"ID":"8f3e0555-dd40-4a68-bcd1-df6fb4be45df","Type":"ContainerDied","Data":"f49464f533029b7d3cbe90ec09e097ac7071c0bc06d81244f1234e885bdba925"} Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.067915 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f49464f533029b7d3cbe90ec09e097ac7071c0bc06d81244f1234e885bdba925" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.067933 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-scqpq" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.170745 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8"] Dec 11 08:55:46 crc kubenswrapper[4992]: E1211 08:55:46.171322 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3e0555-dd40-4a68-bcd1-df6fb4be45df" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.171344 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3e0555-dd40-4a68-bcd1-df6fb4be45df" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.171533 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3e0555-dd40-4a68-bcd1-df6fb4be45df" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.172325 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.178171 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.178227 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.178481 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.182338 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8"] Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.184669 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.338227 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c862n\" (UniqueName: \"kubernetes.io/projected/f5fefa11-ffb3-491d-90e5-c957a37896ef-kube-api-access-c862n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.338659 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 
08:55:46.338706 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.440888 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c862n\" (UniqueName: \"kubernetes.io/projected/f5fefa11-ffb3-491d-90e5-c957a37896ef-kube-api-access-c862n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.441057 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.441075 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.445532 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.456380 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.462655 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c862n\" (UniqueName: \"kubernetes.io/projected/f5fefa11-ffb3-491d-90e5-c957a37896ef-kube-api-access-c862n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.491718 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:46 crc kubenswrapper[4992]: I1211 08:55:46.993369 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8"] Dec 11 08:55:47 crc kubenswrapper[4992]: I1211 08:55:47.002760 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 08:55:47 crc kubenswrapper[4992]: I1211 08:55:47.076963 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" event={"ID":"f5fefa11-ffb3-491d-90e5-c957a37896ef","Type":"ContainerStarted","Data":"631233493a601c3c54f1c816abb45ed85576de73501484657abe1bcffaf6746a"} Dec 11 08:55:49 crc kubenswrapper[4992]: I1211 08:55:49.096949 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" event={"ID":"f5fefa11-ffb3-491d-90e5-c957a37896ef","Type":"ContainerStarted","Data":"8db3dd1f9dae0b828d4f175d3ced9d5cbff8d8b0873830d55d76d4e012e05756"} Dec 11 08:55:49 crc kubenswrapper[4992]: I1211 08:55:49.115512 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" podStartSLOduration=2.210005333 podStartE2EDuration="3.115492247s" podCreationTimestamp="2025-12-11 08:55:46 +0000 UTC" firstStartedPulling="2025-12-11 08:55:47.0025309 +0000 UTC m=+1971.262004826" lastFinishedPulling="2025-12-11 08:55:47.908017814 +0000 UTC m=+1972.167491740" observedRunningTime="2025-12-11 08:55:49.113134349 +0000 UTC m=+1973.372608285" watchObservedRunningTime="2025-12-11 08:55:49.115492247 +0000 UTC m=+1973.374966173" Dec 11 08:55:53 crc kubenswrapper[4992]: I1211 08:55:53.137854 4992 generic.go:334] "Generic (PLEG): container finished" podID="f5fefa11-ffb3-491d-90e5-c957a37896ef" 
containerID="8db3dd1f9dae0b828d4f175d3ced9d5cbff8d8b0873830d55d76d4e012e05756" exitCode=0 Dec 11 08:55:53 crc kubenswrapper[4992]: I1211 08:55:53.138484 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" event={"ID":"f5fefa11-ffb3-491d-90e5-c957a37896ef","Type":"ContainerDied","Data":"8db3dd1f9dae0b828d4f175d3ced9d5cbff8d8b0873830d55d76d4e012e05756"} Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.614451 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.794076 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c862n\" (UniqueName: \"kubernetes.io/projected/f5fefa11-ffb3-491d-90e5-c957a37896ef-kube-api-access-c862n\") pod \"f5fefa11-ffb3-491d-90e5-c957a37896ef\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.794658 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-inventory\") pod \"f5fefa11-ffb3-491d-90e5-c957a37896ef\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.794769 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-ssh-key\") pod \"f5fefa11-ffb3-491d-90e5-c957a37896ef\" (UID: \"f5fefa11-ffb3-491d-90e5-c957a37896ef\") " Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.800302 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fefa11-ffb3-491d-90e5-c957a37896ef-kube-api-access-c862n" (OuterVolumeSpecName: "kube-api-access-c862n") pod 
"f5fefa11-ffb3-491d-90e5-c957a37896ef" (UID: "f5fefa11-ffb3-491d-90e5-c957a37896ef"). InnerVolumeSpecName "kube-api-access-c862n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.831373 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-inventory" (OuterVolumeSpecName: "inventory") pod "f5fefa11-ffb3-491d-90e5-c957a37896ef" (UID: "f5fefa11-ffb3-491d-90e5-c957a37896ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.848959 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f5fefa11-ffb3-491d-90e5-c957a37896ef" (UID: "f5fefa11-ffb3-491d-90e5-c957a37896ef"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.897692 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.897736 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f5fefa11-ffb3-491d-90e5-c957a37896ef-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:55:54 crc kubenswrapper[4992]: I1211 08:55:54.897749 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c862n\" (UniqueName: \"kubernetes.io/projected/f5fefa11-ffb3-491d-90e5-c957a37896ef-kube-api-access-c862n\") on node \"crc\" DevicePath \"\"" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.157997 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" event={"ID":"f5fefa11-ffb3-491d-90e5-c957a37896ef","Type":"ContainerDied","Data":"631233493a601c3c54f1c816abb45ed85576de73501484657abe1bcffaf6746a"} Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.158069 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631233493a601c3c54f1c816abb45ed85576de73501484657abe1bcffaf6746a" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.158141 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.217892 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c"] Dec 11 08:55:55 crc kubenswrapper[4992]: E1211 08:55:55.218271 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fefa11-ffb3-491d-90e5-c957a37896ef" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.218291 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fefa11-ffb3-491d-90e5-c957a37896ef" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.218486 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fefa11-ffb3-491d-90e5-c957a37896ef" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.219140 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.223033 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.223374 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.225110 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.225924 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.235267 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c"] Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.303726 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.303790 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.303822 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km4sz\" (UniqueName: \"kubernetes.io/projected/865a4175-ac8e-43c9-ab29-824386311e22-kube-api-access-km4sz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.405539 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.405603 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.405646 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km4sz\" (UniqueName: \"kubernetes.io/projected/865a4175-ac8e-43c9-ab29-824386311e22-kube-api-access-km4sz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.409180 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: 
\"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.418294 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.422944 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km4sz\" (UniqueName: \"kubernetes.io/projected/865a4175-ac8e-43c9-ab29-824386311e22-kube-api-access-km4sz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-v4q2c\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:55 crc kubenswrapper[4992]: I1211 08:55:55.554837 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:55:56 crc kubenswrapper[4992]: I1211 08:55:56.073786 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c"] Dec 11 08:55:56 crc kubenswrapper[4992]: I1211 08:55:56.168608 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" event={"ID":"865a4175-ac8e-43c9-ab29-824386311e22","Type":"ContainerStarted","Data":"c162982e4aff05017fa3399a6f4e0a84507666759fdfdc0e9d6cf5a172216f3d"} Dec 11 08:55:56 crc kubenswrapper[4992]: I1211 08:55:56.540834 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:55:57 crc kubenswrapper[4992]: I1211 08:55:57.177351 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" event={"ID":"865a4175-ac8e-43c9-ab29-824386311e22","Type":"ContainerStarted","Data":"8627d3b7f290d373391567584d86802ce1843970cf02bf63902b8e9b471ec27e"} Dec 11 08:55:57 crc kubenswrapper[4992]: I1211 08:55:57.203405 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" podStartSLOduration=1.742562139 podStartE2EDuration="2.203385423s" podCreationTimestamp="2025-12-11 08:55:55 +0000 UTC" firstStartedPulling="2025-12-11 08:55:56.077374758 +0000 UTC m=+1980.336848684" lastFinishedPulling="2025-12-11 08:55:56.538198032 +0000 UTC m=+1980.797671968" observedRunningTime="2025-12-11 08:55:57.196888205 +0000 UTC m=+1981.456362131" watchObservedRunningTime="2025-12-11 08:55:57.203385423 +0000 UTC m=+1981.462859349" Dec 11 08:56:02 crc kubenswrapper[4992]: I1211 08:56:02.065832 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dl5hv"] Dec 11 08:56:02 crc kubenswrapper[4992]: I1211 
08:56:02.074454 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dl5hv"] Dec 11 08:56:02 crc kubenswrapper[4992]: I1211 08:56:02.106652 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d5ca90-15ec-4992-b942-c8d63cd82ea6" path="/var/lib/kubelet/pods/e5d5ca90-15ec-4992-b942-c8d63cd82ea6/volumes" Dec 11 08:56:03 crc kubenswrapper[4992]: I1211 08:56:03.028424 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9mjph"] Dec 11 08:56:03 crc kubenswrapper[4992]: I1211 08:56:03.040263 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9mjph"] Dec 11 08:56:04 crc kubenswrapper[4992]: I1211 08:56:04.107199 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765acb41-3b1b-4ff0-a57e-9334876b8750" path="/var/lib/kubelet/pods/765acb41-3b1b-4ff0-a57e-9334876b8750/volumes" Dec 11 08:56:11 crc kubenswrapper[4992]: I1211 08:56:11.584936 4992 scope.go:117] "RemoveContainer" containerID="27ca317d24e17faf4d99bfb2df1f2645a6a5e4a67c43cd8450cc3b61f210c631" Dec 11 08:56:11 crc kubenswrapper[4992]: I1211 08:56:11.642674 4992 scope.go:117] "RemoveContainer" containerID="161173e658d40330fe5968c18371acedb040a57ecb50a9236c73daca5cf866ab" Dec 11 08:56:11 crc kubenswrapper[4992]: I1211 08:56:11.696900 4992 scope.go:117] "RemoveContainer" containerID="1b14cbf457af962f9c8b2c30b2b84e577e1b6977f4b88290314fd9d29c4638aa" Dec 11 08:56:35 crc kubenswrapper[4992]: I1211 08:56:35.511547 4992 generic.go:334] "Generic (PLEG): container finished" podID="865a4175-ac8e-43c9-ab29-824386311e22" containerID="8627d3b7f290d373391567584d86802ce1843970cf02bf63902b8e9b471ec27e" exitCode=0 Dec 11 08:56:35 crc kubenswrapper[4992]: I1211 08:56:35.511689 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" 
event={"ID":"865a4175-ac8e-43c9-ab29-824386311e22","Type":"ContainerDied","Data":"8627d3b7f290d373391567584d86802ce1843970cf02bf63902b8e9b471ec27e"} Dec 11 08:56:36 crc kubenswrapper[4992]: I1211 08:56:36.940526 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.131915 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-ssh-key\") pod \"865a4175-ac8e-43c9-ab29-824386311e22\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.132259 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km4sz\" (UniqueName: \"kubernetes.io/projected/865a4175-ac8e-43c9-ab29-824386311e22-kube-api-access-km4sz\") pod \"865a4175-ac8e-43c9-ab29-824386311e22\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.132292 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-inventory\") pod \"865a4175-ac8e-43c9-ab29-824386311e22\" (UID: \"865a4175-ac8e-43c9-ab29-824386311e22\") " Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.140991 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865a4175-ac8e-43c9-ab29-824386311e22-kube-api-access-km4sz" (OuterVolumeSpecName: "kube-api-access-km4sz") pod "865a4175-ac8e-43c9-ab29-824386311e22" (UID: "865a4175-ac8e-43c9-ab29-824386311e22"). InnerVolumeSpecName "kube-api-access-km4sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.162156 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-inventory" (OuterVolumeSpecName: "inventory") pod "865a4175-ac8e-43c9-ab29-824386311e22" (UID: "865a4175-ac8e-43c9-ab29-824386311e22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.166569 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "865a4175-ac8e-43c9-ab29-824386311e22" (UID: "865a4175-ac8e-43c9-ab29-824386311e22"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.233982 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km4sz\" (UniqueName: \"kubernetes.io/projected/865a4175-ac8e-43c9-ab29-824386311e22-kube-api-access-km4sz\") on node \"crc\" DevicePath \"\"" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.234013 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.234025 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/865a4175-ac8e-43c9-ab29-824386311e22-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.529017 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" 
event={"ID":"865a4175-ac8e-43c9-ab29-824386311e22","Type":"ContainerDied","Data":"c162982e4aff05017fa3399a6f4e0a84507666759fdfdc0e9d6cf5a172216f3d"} Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.529363 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c162982e4aff05017fa3399a6f4e0a84507666759fdfdc0e9d6cf5a172216f3d" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.529102 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-v4q2c" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.625143 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm"] Dec 11 08:56:37 crc kubenswrapper[4992]: E1211 08:56:37.625536 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865a4175-ac8e-43c9-ab29-824386311e22" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.625555 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="865a4175-ac8e-43c9-ab29-824386311e22" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.625775 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="865a4175-ac8e-43c9-ab29-824386311e22" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.626470 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.629574 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.629836 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.629919 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.631530 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.638829 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm"] Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.743280 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.743361 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.743593 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82pxx\" (UniqueName: \"kubernetes.io/projected/e80e1d51-3960-4957-95e0-987fc9b78120-kube-api-access-82pxx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.845798 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.845920 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.845998 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82pxx\" (UniqueName: \"kubernetes.io/projected/e80e1d51-3960-4957-95e0-987fc9b78120-kube-api-access-82pxx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.850459 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: 
\"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.850511 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.867890 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82pxx\" (UniqueName: \"kubernetes.io/projected/e80e1d51-3960-4957-95e0-987fc9b78120-kube-api-access-82pxx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-mxttm\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:37 crc kubenswrapper[4992]: I1211 08:56:37.950907 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:56:38 crc kubenswrapper[4992]: I1211 08:56:38.442790 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm"] Dec 11 08:56:38 crc kubenswrapper[4992]: I1211 08:56:38.536873 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" event={"ID":"e80e1d51-3960-4957-95e0-987fc9b78120","Type":"ContainerStarted","Data":"820fe83b2482ab8b00d4a2bda4ecf039368a32e0acec623a9c404c4474ebc3a7"} Dec 11 08:56:40 crc kubenswrapper[4992]: I1211 08:56:40.554342 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" event={"ID":"e80e1d51-3960-4957-95e0-987fc9b78120","Type":"ContainerStarted","Data":"3a0e44b90191d6b345e2ad438576b6f83e3d1c51f9be1881c8b6d8e4f7628ea3"} Dec 11 08:56:40 crc kubenswrapper[4992]: I1211 08:56:40.574988 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" podStartSLOduration=1.6901880280000001 podStartE2EDuration="3.574967707s" podCreationTimestamp="2025-12-11 08:56:37 +0000 UTC" firstStartedPulling="2025-12-11 08:56:38.44725586 +0000 UTC m=+2022.706729786" lastFinishedPulling="2025-12-11 08:56:40.332035539 +0000 UTC m=+2024.591509465" observedRunningTime="2025-12-11 08:56:40.567729331 +0000 UTC m=+2024.827203277" watchObservedRunningTime="2025-12-11 08:56:40.574967707 +0000 UTC m=+2024.834441633" Dec 11 08:56:45 crc kubenswrapper[4992]: I1211 08:56:45.064954 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-549zl"] Dec 11 08:56:45 crc kubenswrapper[4992]: I1211 08:56:45.076712 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-549zl"] Dec 11 08:56:46 crc kubenswrapper[4992]: 
I1211 08:56:46.115240 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801312c0-357d-4900-aa2d-fabe849fb634" path="/var/lib/kubelet/pods/801312c0-357d-4900-aa2d-fabe849fb634/volumes" Dec 11 08:57:05 crc kubenswrapper[4992]: I1211 08:57:05.379321 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:57:05 crc kubenswrapper[4992]: I1211 08:57:05.380978 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:57:11 crc kubenswrapper[4992]: I1211 08:57:11.787330 4992 scope.go:117] "RemoveContainer" containerID="fa0279365301f7b1934bb5d759f88b31923e81cc44e07229edccbf0da87e7a39" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.684777 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zk99w"] Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.687605 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.694321 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zk99w"] Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.818653 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-catalog-content\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.818935 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brwq\" (UniqueName: \"kubernetes.io/projected/7761c1f8-1a6c-4b31-aeac-172cde13d727-kube-api-access-5brwq\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.819140 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-utilities\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.920484 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5brwq\" (UniqueName: \"kubernetes.io/projected/7761c1f8-1a6c-4b31-aeac-172cde13d727-kube-api-access-5brwq\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.920573 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-utilities\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.920663 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-catalog-content\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.921150 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-catalog-content\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.921153 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-utilities\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:17 crc kubenswrapper[4992]: I1211 08:57:17.969723 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5brwq\" (UniqueName: \"kubernetes.io/projected/7761c1f8-1a6c-4b31-aeac-172cde13d727-kube-api-access-5brwq\") pod \"redhat-operators-zk99w\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:18 crc kubenswrapper[4992]: I1211 08:57:18.018067 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:18 crc kubenswrapper[4992]: I1211 08:57:18.515214 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zk99w"] Dec 11 08:57:18 crc kubenswrapper[4992]: I1211 08:57:18.893069 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk99w" event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerStarted","Data":"114c47c4dab0dcfd94fdb415a5e4fc05327bf166dda006d7375e1fbfb6a2d50d"} Dec 11 08:57:19 crc kubenswrapper[4992]: I1211 08:57:19.905395 4992 generic.go:334] "Generic (PLEG): container finished" podID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerID="7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3" exitCode=0 Dec 11 08:57:19 crc kubenswrapper[4992]: I1211 08:57:19.905534 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk99w" event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerDied","Data":"7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3"} Dec 11 08:57:25 crc kubenswrapper[4992]: I1211 08:57:25.958602 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk99w" event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerStarted","Data":"8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a"} Dec 11 08:57:27 crc kubenswrapper[4992]: I1211 08:57:27.978177 4992 generic.go:334] "Generic (PLEG): container finished" podID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerID="8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a" exitCode=0 Dec 11 08:57:27 crc kubenswrapper[4992]: I1211 08:57:27.978275 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk99w" 
event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerDied","Data":"8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a"} Dec 11 08:57:29 crc kubenswrapper[4992]: I1211 08:57:29.998440 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk99w" event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerStarted","Data":"2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28"} Dec 11 08:57:30 crc kubenswrapper[4992]: I1211 08:57:30.018746 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zk99w" podStartSLOduration=3.504908285 podStartE2EDuration="13.018725334s" podCreationTimestamp="2025-12-11 08:57:17 +0000 UTC" firstStartedPulling="2025-12-11 08:57:19.908111223 +0000 UTC m=+2064.167585159" lastFinishedPulling="2025-12-11 08:57:29.421928282 +0000 UTC m=+2073.681402208" observedRunningTime="2025-12-11 08:57:30.013488017 +0000 UTC m=+2074.272961963" watchObservedRunningTime="2025-12-11 08:57:30.018725334 +0000 UTC m=+2074.278199260" Dec 11 08:57:32 crc kubenswrapper[4992]: I1211 08:57:32.015406 4992 generic.go:334] "Generic (PLEG): container finished" podID="e80e1d51-3960-4957-95e0-987fc9b78120" containerID="3a0e44b90191d6b345e2ad438576b6f83e3d1c51f9be1881c8b6d8e4f7628ea3" exitCode=0 Dec 11 08:57:32 crc kubenswrapper[4992]: I1211 08:57:32.015456 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" event={"ID":"e80e1d51-3960-4957-95e0-987fc9b78120","Type":"ContainerDied","Data":"3a0e44b90191d6b345e2ad438576b6f83e3d1c51f9be1881c8b6d8e4f7628ea3"} Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.428473 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.524174 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-inventory\") pod \"e80e1d51-3960-4957-95e0-987fc9b78120\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.524263 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82pxx\" (UniqueName: \"kubernetes.io/projected/e80e1d51-3960-4957-95e0-987fc9b78120-kube-api-access-82pxx\") pod \"e80e1d51-3960-4957-95e0-987fc9b78120\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.524343 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-ssh-key\") pod \"e80e1d51-3960-4957-95e0-987fc9b78120\" (UID: \"e80e1d51-3960-4957-95e0-987fc9b78120\") " Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.530360 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80e1d51-3960-4957-95e0-987fc9b78120-kube-api-access-82pxx" (OuterVolumeSpecName: "kube-api-access-82pxx") pod "e80e1d51-3960-4957-95e0-987fc9b78120" (UID: "e80e1d51-3960-4957-95e0-987fc9b78120"). InnerVolumeSpecName "kube-api-access-82pxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.551235 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e80e1d51-3960-4957-95e0-987fc9b78120" (UID: "e80e1d51-3960-4957-95e0-987fc9b78120"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.552217 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-inventory" (OuterVolumeSpecName: "inventory") pod "e80e1d51-3960-4957-95e0-987fc9b78120" (UID: "e80e1d51-3960-4957-95e0-987fc9b78120"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.626682 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.626726 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82pxx\" (UniqueName: \"kubernetes.io/projected/e80e1d51-3960-4957-95e0-987fc9b78120-kube-api-access-82pxx\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:33 crc kubenswrapper[4992]: I1211 08:57:33.626745 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e80e1d51-3960-4957-95e0-987fc9b78120-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.032001 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" event={"ID":"e80e1d51-3960-4957-95e0-987fc9b78120","Type":"ContainerDied","Data":"820fe83b2482ab8b00d4a2bda4ecf039368a32e0acec623a9c404c4474ebc3a7"} Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.032048 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820fe83b2482ab8b00d4a2bda4ecf039368a32e0acec623a9c404c4474ebc3a7" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.032060 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-mxttm" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.125776 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qd5rv"] Dec 11 08:57:34 crc kubenswrapper[4992]: E1211 08:57:34.134899 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80e1d51-3960-4957-95e0-987fc9b78120" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.134955 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80e1d51-3960-4957-95e0-987fc9b78120" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.136357 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80e1d51-3960-4957-95e0-987fc9b78120" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.137373 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.140355 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.140910 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.141053 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.141447 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.143676 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qd5rv"] Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.237133 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.237203 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.237230 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7hmhl\" (UniqueName: \"kubernetes.io/projected/b8616302-c54e-49e1-98cb-924b70e8050f-kube-api-access-7hmhl\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.339301 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.339397 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.339432 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmhl\" (UniqueName: \"kubernetes.io/projected/b8616302-c54e-49e1-98cb-924b70e8050f-kube-api-access-7hmhl\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.345225 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 
08:57:34.350152 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.359328 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmhl\" (UniqueName: \"kubernetes.io/projected/b8616302-c54e-49e1-98cb-924b70e8050f-kube-api-access-7hmhl\") pod \"ssh-known-hosts-edpm-deployment-qd5rv\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:34 crc kubenswrapper[4992]: I1211 08:57:34.459367 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:35 crc kubenswrapper[4992]: I1211 08:57:35.004867 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qd5rv"] Dec 11 08:57:35 crc kubenswrapper[4992]: W1211 08:57:35.016609 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8616302_c54e_49e1_98cb_924b70e8050f.slice/crio-07a7ec531e81128c9f7e3f0f1a560a27f7e528e40466d96e933c736eff6c620f WatchSource:0}: Error finding container 07a7ec531e81128c9f7e3f0f1a560a27f7e528e40466d96e933c736eff6c620f: Status 404 returned error can't find the container with id 07a7ec531e81128c9f7e3f0f1a560a27f7e528e40466d96e933c736eff6c620f Dec 11 08:57:35 crc kubenswrapper[4992]: I1211 08:57:35.056125 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" event={"ID":"b8616302-c54e-49e1-98cb-924b70e8050f","Type":"ContainerStarted","Data":"07a7ec531e81128c9f7e3f0f1a560a27f7e528e40466d96e933c736eff6c620f"} Dec 11 
08:57:35 crc kubenswrapper[4992]: I1211 08:57:35.378396 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:57:35 crc kubenswrapper[4992]: I1211 08:57:35.378453 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:57:36 crc kubenswrapper[4992]: I1211 08:57:36.066480 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" event={"ID":"b8616302-c54e-49e1-98cb-924b70e8050f","Type":"ContainerStarted","Data":"408b069e5236059442bdb9a55704bfcbecba24d9df774638a984af5271dc6bad"} Dec 11 08:57:36 crc kubenswrapper[4992]: I1211 08:57:36.095857 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" podStartSLOduration=1.6045436 podStartE2EDuration="2.095838398s" podCreationTimestamp="2025-12-11 08:57:34 +0000 UTC" firstStartedPulling="2025-12-11 08:57:35.019421472 +0000 UTC m=+2079.278895398" lastFinishedPulling="2025-12-11 08:57:35.51071626 +0000 UTC m=+2079.770190196" observedRunningTime="2025-12-11 08:57:36.086184752 +0000 UTC m=+2080.345658698" watchObservedRunningTime="2025-12-11 08:57:36.095838398 +0000 UTC m=+2080.355312324" Dec 11 08:57:38 crc kubenswrapper[4992]: I1211 08:57:38.018958 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:38 crc kubenswrapper[4992]: I1211 08:57:38.019324 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:38 crc kubenswrapper[4992]: I1211 08:57:38.075685 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:38 crc kubenswrapper[4992]: I1211 08:57:38.157697 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:38 crc kubenswrapper[4992]: I1211 08:57:38.307224 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zk99w"] Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.123058 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zk99w" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="registry-server" containerID="cri-o://2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28" gracePeriod=2 Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.576447 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.759647 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-utilities\") pod \"7761c1f8-1a6c-4b31-aeac-172cde13d727\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.759805 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5brwq\" (UniqueName: \"kubernetes.io/projected/7761c1f8-1a6c-4b31-aeac-172cde13d727-kube-api-access-5brwq\") pod \"7761c1f8-1a6c-4b31-aeac-172cde13d727\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.759834 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-catalog-content\") pod \"7761c1f8-1a6c-4b31-aeac-172cde13d727\" (UID: \"7761c1f8-1a6c-4b31-aeac-172cde13d727\") " Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.760916 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-utilities" (OuterVolumeSpecName: "utilities") pod "7761c1f8-1a6c-4b31-aeac-172cde13d727" (UID: "7761c1f8-1a6c-4b31-aeac-172cde13d727"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.765315 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7761c1f8-1a6c-4b31-aeac-172cde13d727-kube-api-access-5brwq" (OuterVolumeSpecName: "kube-api-access-5brwq") pod "7761c1f8-1a6c-4b31-aeac-172cde13d727" (UID: "7761c1f8-1a6c-4b31-aeac-172cde13d727"). InnerVolumeSpecName "kube-api-access-5brwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.862471 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5brwq\" (UniqueName: \"kubernetes.io/projected/7761c1f8-1a6c-4b31-aeac-172cde13d727-kube-api-access-5brwq\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.862508 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.884026 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7761c1f8-1a6c-4b31-aeac-172cde13d727" (UID: "7761c1f8-1a6c-4b31-aeac-172cde13d727"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 08:57:40 crc kubenswrapper[4992]: I1211 08:57:40.963482 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7761c1f8-1a6c-4b31-aeac-172cde13d727-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.135511 4992 generic.go:334] "Generic (PLEG): container finished" podID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerID="2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28" exitCode=0 Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.135562 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zk99w" event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerDied","Data":"2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28"} Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.135600 4992 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-zk99w" event={"ID":"7761c1f8-1a6c-4b31-aeac-172cde13d727","Type":"ContainerDied","Data":"114c47c4dab0dcfd94fdb415a5e4fc05327bf166dda006d7375e1fbfb6a2d50d"} Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.135626 4992 scope.go:117] "RemoveContainer" containerID="2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.135693 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zk99w" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.177321 4992 scope.go:117] "RemoveContainer" containerID="8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.184561 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zk99w"] Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.197431 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zk99w"] Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.205395 4992 scope.go:117] "RemoveContainer" containerID="7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.254905 4992 scope.go:117] "RemoveContainer" containerID="2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28" Dec 11 08:57:41 crc kubenswrapper[4992]: E1211 08:57:41.258129 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28\": container with ID starting with 2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28 not found: ID does not exist" containerID="2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.258179 4992 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28"} err="failed to get container status \"2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28\": rpc error: code = NotFound desc = could not find container \"2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28\": container with ID starting with 2804e7a7e3be2af7bb4dc0487c1c00c86bbcc8736b4a7080daf8e348a7fabf28 not found: ID does not exist" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.258212 4992 scope.go:117] "RemoveContainer" containerID="8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a" Dec 11 08:57:41 crc kubenswrapper[4992]: E1211 08:57:41.258675 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a\": container with ID starting with 8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a not found: ID does not exist" containerID="8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.258728 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a"} err="failed to get container status \"8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a\": rpc error: code = NotFound desc = could not find container \"8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a\": container with ID starting with 8fc0418e7c3de3a4fd7deba23a5388c9254d45cd50789eb2401aaa0dfa7b3c7a not found: ID does not exist" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.258766 4992 scope.go:117] "RemoveContainer" containerID="7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3" Dec 11 08:57:41 crc kubenswrapper[4992]: E1211 
08:57:41.259103 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3\": container with ID starting with 7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3 not found: ID does not exist" containerID="7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3" Dec 11 08:57:41 crc kubenswrapper[4992]: I1211 08:57:41.259127 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3"} err="failed to get container status \"7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3\": rpc error: code = NotFound desc = could not find container \"7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3\": container with ID starting with 7ff629bbd887c4047589a0688251927dedb9b433c9ced035bdad3875ece5b1e3 not found: ID does not exist" Dec 11 08:57:42 crc kubenswrapper[4992]: I1211 08:57:42.114747 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" path="/var/lib/kubelet/pods/7761c1f8-1a6c-4b31-aeac-172cde13d727/volumes" Dec 11 08:57:43 crc kubenswrapper[4992]: I1211 08:57:43.156793 4992 generic.go:334] "Generic (PLEG): container finished" podID="b8616302-c54e-49e1-98cb-924b70e8050f" containerID="408b069e5236059442bdb9a55704bfcbecba24d9df774638a984af5271dc6bad" exitCode=0 Dec 11 08:57:43 crc kubenswrapper[4992]: I1211 08:57:43.156859 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" event={"ID":"b8616302-c54e-49e1-98cb-924b70e8050f","Type":"ContainerDied","Data":"408b069e5236059442bdb9a55704bfcbecba24d9df774638a984af5271dc6bad"} Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.563905 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.639231 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-inventory-0\") pod \"b8616302-c54e-49e1-98cb-924b70e8050f\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.639312 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-ssh-key-openstack-edpm-ipam\") pod \"b8616302-c54e-49e1-98cb-924b70e8050f\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.639615 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hmhl\" (UniqueName: \"kubernetes.io/projected/b8616302-c54e-49e1-98cb-924b70e8050f-kube-api-access-7hmhl\") pod \"b8616302-c54e-49e1-98cb-924b70e8050f\" (UID: \"b8616302-c54e-49e1-98cb-924b70e8050f\") " Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.645223 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8616302-c54e-49e1-98cb-924b70e8050f-kube-api-access-7hmhl" (OuterVolumeSpecName: "kube-api-access-7hmhl") pod "b8616302-c54e-49e1-98cb-924b70e8050f" (UID: "b8616302-c54e-49e1-98cb-924b70e8050f"). InnerVolumeSpecName "kube-api-access-7hmhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.666231 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b8616302-c54e-49e1-98cb-924b70e8050f" (UID: "b8616302-c54e-49e1-98cb-924b70e8050f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.666887 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b8616302-c54e-49e1-98cb-924b70e8050f" (UID: "b8616302-c54e-49e1-98cb-924b70e8050f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.742398 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hmhl\" (UniqueName: \"kubernetes.io/projected/b8616302-c54e-49e1-98cb-924b70e8050f-kube-api-access-7hmhl\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.742435 4992 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:44 crc kubenswrapper[4992]: I1211 08:57:44.742444 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b8616302-c54e-49e1-98cb-924b70e8050f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.173308 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" 
event={"ID":"b8616302-c54e-49e1-98cb-924b70e8050f","Type":"ContainerDied","Data":"07a7ec531e81128c9f7e3f0f1a560a27f7e528e40466d96e933c736eff6c620f"} Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.173358 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a7ec531e81128c9f7e3f0f1a560a27f7e528e40466d96e933c736eff6c620f" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.173845 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qd5rv" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.259649 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt"] Dec 11 08:57:45 crc kubenswrapper[4992]: E1211 08:57:45.260111 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="extract-utilities" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.260136 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="extract-utilities" Dec 11 08:57:45 crc kubenswrapper[4992]: E1211 08:57:45.260164 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8616302-c54e-49e1-98cb-924b70e8050f" containerName="ssh-known-hosts-edpm-deployment" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.260173 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8616302-c54e-49e1-98cb-924b70e8050f" containerName="ssh-known-hosts-edpm-deployment" Dec 11 08:57:45 crc kubenswrapper[4992]: E1211 08:57:45.260195 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="registry-server" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.260202 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="registry-server" Dec 11 08:57:45 crc 
kubenswrapper[4992]: E1211 08:57:45.260226 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="extract-content" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.260235 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="extract-content" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.260553 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8616302-c54e-49e1-98cb-924b70e8050f" containerName="ssh-known-hosts-edpm-deployment" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.260581 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7761c1f8-1a6c-4b31-aeac-172cde13d727" containerName="registry-server" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.261500 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.264674 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.264793 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.264894 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.264957 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.273941 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt"] Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.454991 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.455092 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbzb\" (UniqueName: \"kubernetes.io/projected/daffe0c7-1479-4565-9035-46e508469995-kube-api-access-rpbzb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.455359 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.557710 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.557792 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbzb\" (UniqueName: \"kubernetes.io/projected/daffe0c7-1479-4565-9035-46e508469995-kube-api-access-rpbzb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: 
\"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.557884 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.563363 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.563363 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.597469 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbzb\" (UniqueName: \"kubernetes.io/projected/daffe0c7-1479-4565-9035-46e508469995-kube-api-access-rpbzb\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bb5gt\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:45 crc kubenswrapper[4992]: I1211 08:57:45.883115 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:46 crc kubenswrapper[4992]: I1211 08:57:46.379148 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt"] Dec 11 08:57:47 crc kubenswrapper[4992]: I1211 08:57:47.194747 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" event={"ID":"daffe0c7-1479-4565-9035-46e508469995","Type":"ContainerStarted","Data":"cd48422067afe821b904a42e1906c794a98c98fea362d87770184e39486661b7"} Dec 11 08:57:49 crc kubenswrapper[4992]: I1211 08:57:49.226203 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" event={"ID":"daffe0c7-1479-4565-9035-46e508469995","Type":"ContainerStarted","Data":"f91ccb31f93bdcb0d8ed2924c55662d1d7738597ad350b5fbfc27cd9da8125fa"} Dec 11 08:57:49 crc kubenswrapper[4992]: I1211 08:57:49.250224 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" podStartSLOduration=2.287116066 podStartE2EDuration="4.250207776s" podCreationTimestamp="2025-12-11 08:57:45 +0000 UTC" firstStartedPulling="2025-12-11 08:57:46.386531801 +0000 UTC m=+2090.646005727" lastFinishedPulling="2025-12-11 08:57:48.349623511 +0000 UTC m=+2092.609097437" observedRunningTime="2025-12-11 08:57:49.249037267 +0000 UTC m=+2093.508511193" watchObservedRunningTime="2025-12-11 08:57:49.250207776 +0000 UTC m=+2093.509681692" Dec 11 08:57:58 crc kubenswrapper[4992]: I1211 08:57:58.310426 4992 generic.go:334] "Generic (PLEG): container finished" podID="daffe0c7-1479-4565-9035-46e508469995" containerID="f91ccb31f93bdcb0d8ed2924c55662d1d7738597ad350b5fbfc27cd9da8125fa" exitCode=0 Dec 11 08:57:58 crc kubenswrapper[4992]: I1211 08:57:58.310530 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" event={"ID":"daffe0c7-1479-4565-9035-46e508469995","Type":"ContainerDied","Data":"f91ccb31f93bdcb0d8ed2924c55662d1d7738597ad350b5fbfc27cd9da8125fa"} Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.748610 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.934464 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-inventory\") pod \"daffe0c7-1479-4565-9035-46e508469995\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.934733 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-ssh-key\") pod \"daffe0c7-1479-4565-9035-46e508469995\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.934811 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpbzb\" (UniqueName: \"kubernetes.io/projected/daffe0c7-1479-4565-9035-46e508469995-kube-api-access-rpbzb\") pod \"daffe0c7-1479-4565-9035-46e508469995\" (UID: \"daffe0c7-1479-4565-9035-46e508469995\") " Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.950955 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daffe0c7-1479-4565-9035-46e508469995-kube-api-access-rpbzb" (OuterVolumeSpecName: "kube-api-access-rpbzb") pod "daffe0c7-1479-4565-9035-46e508469995" (UID: "daffe0c7-1479-4565-9035-46e508469995"). InnerVolumeSpecName "kube-api-access-rpbzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.967656 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "daffe0c7-1479-4565-9035-46e508469995" (UID: "daffe0c7-1479-4565-9035-46e508469995"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:57:59 crc kubenswrapper[4992]: I1211 08:57:59.979971 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-inventory" (OuterVolumeSpecName: "inventory") pod "daffe0c7-1479-4565-9035-46e508469995" (UID: "daffe0c7-1479-4565-9035-46e508469995"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.037587 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.037941 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/daffe0c7-1479-4565-9035-46e508469995-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.037954 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpbzb\" (UniqueName: \"kubernetes.io/projected/daffe0c7-1479-4565-9035-46e508469995-kube-api-access-rpbzb\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.331470 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.331437 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bb5gt" event={"ID":"daffe0c7-1479-4565-9035-46e508469995","Type":"ContainerDied","Data":"cd48422067afe821b904a42e1906c794a98c98fea362d87770184e39486661b7"} Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.331738 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd48422067afe821b904a42e1906c794a98c98fea362d87770184e39486661b7" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.396743 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv"] Dec 11 08:58:00 crc kubenswrapper[4992]: E1211 08:58:00.397211 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daffe0c7-1479-4565-9035-46e508469995" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.397235 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="daffe0c7-1479-4565-9035-46e508469995" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.397478 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="daffe0c7-1479-4565-9035-46e508469995" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.398270 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.401001 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.401121 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.401959 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.402133 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.410425 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv"] Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.450355 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.450536 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm59p\" (UniqueName: \"kubernetes.io/projected/57455273-3fb5-408e-a80c-c42880a6b0bf-kube-api-access-pm59p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.450675 4992 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.552304 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm59p\" (UniqueName: \"kubernetes.io/projected/57455273-3fb5-408e-a80c-c42880a6b0bf-kube-api-access-pm59p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.552485 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.552614 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.556969 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: 
\"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.557060 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.574765 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm59p\" (UniqueName: \"kubernetes.io/projected/57455273-3fb5-408e-a80c-c42880a6b0bf-kube-api-access-pm59p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:00 crc kubenswrapper[4992]: I1211 08:58:00.767310 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:01 crc kubenswrapper[4992]: I1211 08:58:01.291374 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv"] Dec 11 08:58:01 crc kubenswrapper[4992]: I1211 08:58:01.343750 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" event={"ID":"57455273-3fb5-408e-a80c-c42880a6b0bf","Type":"ContainerStarted","Data":"de0d2e69a746c2a4a28f0047788dd49f0c05b4cf60a3b5a8a4ab5f8d4288b920"} Dec 11 08:58:02 crc kubenswrapper[4992]: I1211 08:58:02.354552 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" event={"ID":"57455273-3fb5-408e-a80c-c42880a6b0bf","Type":"ContainerStarted","Data":"ca2af55f50e929fe0971f3256140fe35e858cebedb9d53d796d1b38cdd2f16da"} Dec 11 08:58:02 crc kubenswrapper[4992]: I1211 08:58:02.375350 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" podStartSLOduration=1.533480141 podStartE2EDuration="2.375333203s" podCreationTimestamp="2025-12-11 08:58:00 +0000 UTC" firstStartedPulling="2025-12-11 08:58:01.300746342 +0000 UTC m=+2105.560220268" lastFinishedPulling="2025-12-11 08:58:02.142599404 +0000 UTC m=+2106.402073330" observedRunningTime="2025-12-11 08:58:02.367531372 +0000 UTC m=+2106.627005298" watchObservedRunningTime="2025-12-11 08:58:02.375333203 +0000 UTC m=+2106.634807129" Dec 11 08:58:05 crc kubenswrapper[4992]: I1211 08:58:05.378624 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 08:58:05 crc kubenswrapper[4992]: 
I1211 08:58:05.378921 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 08:58:05 crc kubenswrapper[4992]: I1211 08:58:05.378964 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 08:58:05 crc kubenswrapper[4992]: I1211 08:58:05.379738 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a3dc42b5a3cd43c62970b42a2e0157ec46f68f2e53bb7c9edf2af183a976943"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 08:58:05 crc kubenswrapper[4992]: I1211 08:58:05.379799 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://3a3dc42b5a3cd43c62970b42a2e0157ec46f68f2e53bb7c9edf2af183a976943" gracePeriod=600 Dec 11 08:58:06 crc kubenswrapper[4992]: I1211 08:58:06.389341 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="3a3dc42b5a3cd43c62970b42a2e0157ec46f68f2e53bb7c9edf2af183a976943" exitCode=0 Dec 11 08:58:06 crc kubenswrapper[4992]: I1211 08:58:06.389394 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"3a3dc42b5a3cd43c62970b42a2e0157ec46f68f2e53bb7c9edf2af183a976943"} Dec 11 08:58:06 crc 
kubenswrapper[4992]: I1211 08:58:06.390129 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f"} Dec 11 08:58:06 crc kubenswrapper[4992]: I1211 08:58:06.390153 4992 scope.go:117] "RemoveContainer" containerID="be1cfbff847312131f4c14356f7cbbc53ba037bcbbbb0e9e55883630853a7f69" Dec 11 08:58:12 crc kubenswrapper[4992]: I1211 08:58:12.446316 4992 generic.go:334] "Generic (PLEG): container finished" podID="57455273-3fb5-408e-a80c-c42880a6b0bf" containerID="ca2af55f50e929fe0971f3256140fe35e858cebedb9d53d796d1b38cdd2f16da" exitCode=0 Dec 11 08:58:12 crc kubenswrapper[4992]: I1211 08:58:12.446435 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" event={"ID":"57455273-3fb5-408e-a80c-c42880a6b0bf","Type":"ContainerDied","Data":"ca2af55f50e929fe0971f3256140fe35e858cebedb9d53d796d1b38cdd2f16da"} Dec 11 08:58:13 crc kubenswrapper[4992]: I1211 08:58:13.882760 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:13 crc kubenswrapper[4992]: I1211 08:58:13.998113 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm59p\" (UniqueName: \"kubernetes.io/projected/57455273-3fb5-408e-a80c-c42880a6b0bf-kube-api-access-pm59p\") pod \"57455273-3fb5-408e-a80c-c42880a6b0bf\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " Dec 11 08:58:13 crc kubenswrapper[4992]: I1211 08:58:13.998462 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-ssh-key\") pod \"57455273-3fb5-408e-a80c-c42880a6b0bf\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " Dec 11 08:58:13 crc kubenswrapper[4992]: I1211 08:58:13.998529 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-inventory\") pod \"57455273-3fb5-408e-a80c-c42880a6b0bf\" (UID: \"57455273-3fb5-408e-a80c-c42880a6b0bf\") " Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.003871 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57455273-3fb5-408e-a80c-c42880a6b0bf-kube-api-access-pm59p" (OuterVolumeSpecName: "kube-api-access-pm59p") pod "57455273-3fb5-408e-a80c-c42880a6b0bf" (UID: "57455273-3fb5-408e-a80c-c42880a6b0bf"). InnerVolumeSpecName "kube-api-access-pm59p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.036012 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "57455273-3fb5-408e-a80c-c42880a6b0bf" (UID: "57455273-3fb5-408e-a80c-c42880a6b0bf"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.038929 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-inventory" (OuterVolumeSpecName: "inventory") pod "57455273-3fb5-408e-a80c-c42880a6b0bf" (UID: "57455273-3fb5-408e-a80c-c42880a6b0bf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.101179 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.101229 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/57455273-3fb5-408e-a80c-c42880a6b0bf-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.101249 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm59p\" (UniqueName: \"kubernetes.io/projected/57455273-3fb5-408e-a80c-c42880a6b0bf-kube-api-access-pm59p\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.467459 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" event={"ID":"57455273-3fb5-408e-a80c-c42880a6b0bf","Type":"ContainerDied","Data":"de0d2e69a746c2a4a28f0047788dd49f0c05b4cf60a3b5a8a4ab5f8d4288b920"} Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.467722 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0d2e69a746c2a4a28f0047788dd49f0c05b4cf60a3b5a8a4ab5f8d4288b920" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.467787 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.599814 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf"] Dec 11 08:58:14 crc kubenswrapper[4992]: E1211 08:58:14.616259 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57455273-3fb5-408e-a80c-c42880a6b0bf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.616309 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="57455273-3fb5-408e-a80c-c42880a6b0bf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.617197 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="57455273-3fb5-408e-a80c-c42880a6b0bf" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.618267 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf"] Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.618391 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.624727 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.624822 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.624937 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.625102 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.625180 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.625340 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.625982 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.626205 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722172 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722233 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722254 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722275 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722295 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: 
\"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722353 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722478 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722513 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722536 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722603 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722648 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722665 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722727 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: 
\"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.722775 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwks2\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-kube-api-access-cwks2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.824554 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.824616 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.824681 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 
08:58:14.825454 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825500 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825519 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825546 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825573 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cwks2\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-kube-api-access-cwks2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825676 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825702 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825720 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825749 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825773 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.825829 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.829015 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.830497 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.830499 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.830716 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.830805 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.831090 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.831862 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.832109 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.832225 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.832900 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.833358 4992 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.836233 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.838255 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.844039 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwks2\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-kube-api-access-cwks2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-59bbf\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:14 crc kubenswrapper[4992]: I1211 08:58:14.943593 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:15 crc kubenswrapper[4992]: I1211 08:58:15.519058 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf"] Dec 11 08:58:15 crc kubenswrapper[4992]: W1211 08:58:15.526535 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7243a4bc_1d82_40f0_b28f_f6a181d3771b.slice/crio-064a3f23a42c37fb7a74101a180079259d1c1198a1857bf8f18675c2e384956b WatchSource:0}: Error finding container 064a3f23a42c37fb7a74101a180079259d1c1198a1857bf8f18675c2e384956b: Status 404 returned error can't find the container with id 064a3f23a42c37fb7a74101a180079259d1c1198a1857bf8f18675c2e384956b Dec 11 08:58:16 crc kubenswrapper[4992]: I1211 08:58:16.488378 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" event={"ID":"7243a4bc-1d82-40f0-b28f-f6a181d3771b","Type":"ContainerStarted","Data":"064a3f23a42c37fb7a74101a180079259d1c1198a1857bf8f18675c2e384956b"} Dec 11 08:58:17 crc kubenswrapper[4992]: I1211 08:58:17.500851 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" event={"ID":"7243a4bc-1d82-40f0-b28f-f6a181d3771b","Type":"ContainerStarted","Data":"ab8e47f01c70a9239b727501d8fb419956cd4348c37cfbce25c9eb88f3a988ea"} Dec 11 08:58:17 crc kubenswrapper[4992]: I1211 08:58:17.531262 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" podStartSLOduration=2.6592406029999998 podStartE2EDuration="3.53124434s" podCreationTimestamp="2025-12-11 08:58:14 +0000 UTC" firstStartedPulling="2025-12-11 08:58:15.530519002 +0000 UTC m=+2119.789992948" lastFinishedPulling="2025-12-11 08:58:16.402522759 +0000 UTC 
m=+2120.661996685" observedRunningTime="2025-12-11 08:58:17.525659714 +0000 UTC m=+2121.785133730" watchObservedRunningTime="2025-12-11 08:58:17.53124434 +0000 UTC m=+2121.790718256" Dec 11 08:58:56 crc kubenswrapper[4992]: I1211 08:58:56.810142 4992 generic.go:334] "Generic (PLEG): container finished" podID="7243a4bc-1d82-40f0-b28f-f6a181d3771b" containerID="ab8e47f01c70a9239b727501d8fb419956cd4348c37cfbce25c9eb88f3a988ea" exitCode=0 Dec 11 08:58:56 crc kubenswrapper[4992]: I1211 08:58:56.810232 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" event={"ID":"7243a4bc-1d82-40f0-b28f-f6a181d3771b","Type":"ContainerDied","Data":"ab8e47f01c70a9239b727501d8fb419956cd4348c37cfbce25c9eb88f3a988ea"} Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.369047 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448159 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-libvirt-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448207 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-telemetry-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448241 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ovn-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448338 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwks2\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-kube-api-access-cwks2\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448429 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-repo-setup-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448477 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448518 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-neutron-metadata-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448575 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448601 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-nova-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448649 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ssh-key\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448697 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-bootstrap-combined-ca-bundle\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448726 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-inventory\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448763 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.448802 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\" (UID: \"7243a4bc-1d82-40f0-b28f-f6a181d3771b\") " Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.455176 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.455940 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.456048 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.456447 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.457867 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.458684 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.459112 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.459229 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.459952 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.460054 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-kube-api-access-cwks2" (OuterVolumeSpecName: "kube-api-access-cwks2") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "kube-api-access-cwks2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.460597 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.461434 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.484302 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.484998 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-inventory" (OuterVolumeSpecName: "inventory") pod "7243a4bc-1d82-40f0-b28f-f6a181d3771b" (UID: "7243a4bc-1d82-40f0-b28f-f6a181d3771b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551765 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwks2\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-kube-api-access-cwks2\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551797 4992 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551808 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551822 4992 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551833 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551853 4992 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551866 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551877 4992 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551888 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551901 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551913 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7243a4bc-1d82-40f0-b28f-f6a181d3771b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551925 4992 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551934 4992 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.551945 4992 
reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7243a4bc-1d82-40f0-b28f-f6a181d3771b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.829045 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" event={"ID":"7243a4bc-1d82-40f0-b28f-f6a181d3771b","Type":"ContainerDied","Data":"064a3f23a42c37fb7a74101a180079259d1c1198a1857bf8f18675c2e384956b"} Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.829080 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064a3f23a42c37fb7a74101a180079259d1c1198a1857bf8f18675c2e384956b" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.829128 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-59bbf" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.944670 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr"] Dec 11 08:58:58 crc kubenswrapper[4992]: E1211 08:58:58.945022 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7243a4bc-1d82-40f0-b28f-f6a181d3771b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.945041 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7243a4bc-1d82-40f0-b28f-f6a181d3771b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.945277 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7243a4bc-1d82-40f0-b28f-f6a181d3771b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.945940 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.948627 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.948627 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.948711 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.949384 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.954097 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr"] Dec 11 08:58:58 crc kubenswrapper[4992]: I1211 08:58:58.958221 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.059465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.059552 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5r9m\" (UniqueName: \"kubernetes.io/projected/a47b817b-7906-4327-ba35-740815f4c02c-kube-api-access-d5r9m\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: 
\"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.059721 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.060025 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a47b817b-7906-4327-ba35-740815f4c02c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.060075 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.161656 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.161741 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d5r9m\" (UniqueName: \"kubernetes.io/projected/a47b817b-7906-4327-ba35-740815f4c02c-kube-api-access-d5r9m\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.161851 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.161882 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a47b817b-7906-4327-ba35-740815f4c02c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.161942 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.163113 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a47b817b-7906-4327-ba35-740815f4c02c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc 
kubenswrapper[4992]: I1211 08:58:59.165817 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.166006 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.170283 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.181299 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5r9m\" (UniqueName: \"kubernetes.io/projected/a47b817b-7906-4327-ba35-740815f4c02c-kube-api-access-d5r9m\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-slpxr\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.262875 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.801696 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr"] Dec 11 08:58:59 crc kubenswrapper[4992]: I1211 08:58:59.837555 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" event={"ID":"a47b817b-7906-4327-ba35-740815f4c02c","Type":"ContainerStarted","Data":"bce5ab064ed5fc760e0396c4f6c0a793c6f4d8ed01ae026b0a4f8ea2f2866959"} Dec 11 08:59:00 crc kubenswrapper[4992]: I1211 08:59:00.850455 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" event={"ID":"a47b817b-7906-4327-ba35-740815f4c02c","Type":"ContainerStarted","Data":"318473ffa023d416284a7cbeb19e1f64ea78a5373d2823fcc27da56c6db70eb7"} Dec 11 08:59:00 crc kubenswrapper[4992]: I1211 08:59:00.872536 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" podStartSLOduration=2.434897944 podStartE2EDuration="2.872513171s" podCreationTimestamp="2025-12-11 08:58:58 +0000 UTC" firstStartedPulling="2025-12-11 08:58:59.808514476 +0000 UTC m=+2164.067988402" lastFinishedPulling="2025-12-11 08:59:00.246129703 +0000 UTC m=+2164.505603629" observedRunningTime="2025-12-11 08:59:00.868282358 +0000 UTC m=+2165.127756294" watchObservedRunningTime="2025-12-11 08:59:00.872513171 +0000 UTC m=+2165.131987097" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.163255 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945"] Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.165406 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.169725 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.170074 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.173567 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945"] Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.272837 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561df4eb-a62d-4ce9-bf47-79598eac0eb1-secret-volume\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.272922 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt66j\" (UniqueName: \"kubernetes.io/projected/561df4eb-a62d-4ce9-bf47-79598eac0eb1-kube-api-access-qt66j\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.273004 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561df4eb-a62d-4ce9-bf47-79598eac0eb1-config-volume\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.375151 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561df4eb-a62d-4ce9-bf47-79598eac0eb1-config-volume\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.375357 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561df4eb-a62d-4ce9-bf47-79598eac0eb1-secret-volume\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.376150 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561df4eb-a62d-4ce9-bf47-79598eac0eb1-config-volume\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.376228 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt66j\" (UniqueName: \"kubernetes.io/projected/561df4eb-a62d-4ce9-bf47-79598eac0eb1-kube-api-access-qt66j\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.390908 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/561df4eb-a62d-4ce9-bf47-79598eac0eb1-secret-volume\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.392863 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt66j\" (UniqueName: \"kubernetes.io/projected/561df4eb-a62d-4ce9-bf47-79598eac0eb1-kube-api-access-qt66j\") pod \"collect-profiles-29424060-qd945\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.491586 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:00 crc kubenswrapper[4992]: W1211 09:00:00.947625 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod561df4eb_a62d_4ce9_bf47_79598eac0eb1.slice/crio-1efeea60de8e90856232b5f66ce01c6235c9044f6b26bc4ea9cf05f6d17de43b WatchSource:0}: Error finding container 1efeea60de8e90856232b5f66ce01c6235c9044f6b26bc4ea9cf05f6d17de43b: Status 404 returned error can't find the container with id 1efeea60de8e90856232b5f66ce01c6235c9044f6b26bc4ea9cf05f6d17de43b Dec 11 09:00:00 crc kubenswrapper[4992]: I1211 09:00:00.949792 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945"] Dec 11 09:00:01 crc kubenswrapper[4992]: I1211 09:00:01.374450 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" event={"ID":"561df4eb-a62d-4ce9-bf47-79598eac0eb1","Type":"ContainerStarted","Data":"1efeea60de8e90856232b5f66ce01c6235c9044f6b26bc4ea9cf05f6d17de43b"} Dec 11 09:00:05 crc 
kubenswrapper[4992]: I1211 09:00:05.379337 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:00:05 crc kubenswrapper[4992]: I1211 09:00:05.380076 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:00:06 crc kubenswrapper[4992]: I1211 09:00:06.425909 4992 generic.go:334] "Generic (PLEG): container finished" podID="a47b817b-7906-4327-ba35-740815f4c02c" containerID="318473ffa023d416284a7cbeb19e1f64ea78a5373d2823fcc27da56c6db70eb7" exitCode=0 Dec 11 09:00:06 crc kubenswrapper[4992]: I1211 09:00:06.426684 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" event={"ID":"a47b817b-7906-4327-ba35-740815f4c02c","Type":"ContainerDied","Data":"318473ffa023d416284a7cbeb19e1f64ea78a5373d2823fcc27da56c6db70eb7"} Dec 11 09:00:06 crc kubenswrapper[4992]: I1211 09:00:06.430089 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" event={"ID":"561df4eb-a62d-4ce9-bf47-79598eac0eb1","Type":"ContainerStarted","Data":"46ac1f3f51e508ca876382c1a50b83a2685fa54ce1fa01b72f28a9b2930e1799"} Dec 11 09:00:06 crc kubenswrapper[4992]: I1211 09:00:06.470763 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" podStartSLOduration=6.470741317 podStartE2EDuration="6.470741317s" podCreationTimestamp="2025-12-11 09:00:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:00:06.466904444 +0000 UTC m=+2230.726378380" watchObservedRunningTime="2025-12-11 09:00:06.470741317 +0000 UTC m=+2230.730215243" Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.441548 4992 generic.go:334] "Generic (PLEG): container finished" podID="561df4eb-a62d-4ce9-bf47-79598eac0eb1" containerID="46ac1f3f51e508ca876382c1a50b83a2685fa54ce1fa01b72f28a9b2930e1799" exitCode=0 Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.441623 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" event={"ID":"561df4eb-a62d-4ce9-bf47-79598eac0eb1","Type":"ContainerDied","Data":"46ac1f3f51e508ca876382c1a50b83a2685fa54ce1fa01b72f28a9b2930e1799"} Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.837548 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.930420 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a47b817b-7906-4327-ba35-740815f4c02c-ovncontroller-config-0\") pod \"a47b817b-7906-4327-ba35-740815f4c02c\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.930497 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ssh-key\") pod \"a47b817b-7906-4327-ba35-740815f4c02c\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.930579 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5r9m\" (UniqueName: 
\"kubernetes.io/projected/a47b817b-7906-4327-ba35-740815f4c02c-kube-api-access-d5r9m\") pod \"a47b817b-7906-4327-ba35-740815f4c02c\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.930704 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-inventory\") pod \"a47b817b-7906-4327-ba35-740815f4c02c\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.931319 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ovn-combined-ca-bundle\") pod \"a47b817b-7906-4327-ba35-740815f4c02c\" (UID: \"a47b817b-7906-4327-ba35-740815f4c02c\") " Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.935880 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a47b817b-7906-4327-ba35-740815f4c02c-kube-api-access-d5r9m" (OuterVolumeSpecName: "kube-api-access-d5r9m") pod "a47b817b-7906-4327-ba35-740815f4c02c" (UID: "a47b817b-7906-4327-ba35-740815f4c02c"). InnerVolumeSpecName "kube-api-access-d5r9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.936265 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a47b817b-7906-4327-ba35-740815f4c02c" (UID: "a47b817b-7906-4327-ba35-740815f4c02c"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.957232 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a47b817b-7906-4327-ba35-740815f4c02c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a47b817b-7906-4327-ba35-740815f4c02c" (UID: "a47b817b-7906-4327-ba35-740815f4c02c"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.961397 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a47b817b-7906-4327-ba35-740815f4c02c" (UID: "a47b817b-7906-4327-ba35-740815f4c02c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:00:07 crc kubenswrapper[4992]: I1211 09:00:07.965062 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-inventory" (OuterVolumeSpecName: "inventory") pod "a47b817b-7906-4327-ba35-740815f4c02c" (UID: "a47b817b-7906-4327-ba35-740815f4c02c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.033898 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5r9m\" (UniqueName: \"kubernetes.io/projected/a47b817b-7906-4327-ba35-740815f4c02c-kube-api-access-d5r9m\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.033938 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.033950 4992 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.033961 4992 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a47b817b-7906-4327-ba35-740815f4c02c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.033970 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a47b817b-7906-4327-ba35-740815f4c02c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.452956 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" event={"ID":"a47b817b-7906-4327-ba35-740815f4c02c","Type":"ContainerDied","Data":"bce5ab064ed5fc760e0396c4f6c0a793c6f4d8ed01ae026b0a4f8ea2f2866959"} Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.452991 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-slpxr" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.453005 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce5ab064ed5fc760e0396c4f6c0a793c6f4d8ed01ae026b0a4f8ea2f2866959" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.624675 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml"] Dec 11 09:00:08 crc kubenswrapper[4992]: E1211 09:00:08.625281 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a47b817b-7906-4327-ba35-740815f4c02c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.625355 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="a47b817b-7906-4327-ba35-740815f4c02c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.625611 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="a47b817b-7906-4327-ba35-740815f4c02c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.627330 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.631296 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.631332 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.631533 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.632014 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.633361 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.633384 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.635304 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml"] Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.750866 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.750923 4992 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.750965 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.750986 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r8h8\" (UniqueName: \"kubernetes.io/projected/dd42fab7-63a0-4b66-8264-335d337ec7b3-kube-api-access-7r8h8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.751015 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.751037 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.826229 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.852399 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.852752 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.852902 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 
crc kubenswrapper[4992]: I1211 09:00:08.852986 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r8h8\" (UniqueName: \"kubernetes.io/projected/dd42fab7-63a0-4b66-8264-335d337ec7b3-kube-api-access-7r8h8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.853086 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.853170 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.858771 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.859475 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.861840 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.863226 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.863351 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.872507 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r8h8\" (UniqueName: \"kubernetes.io/projected/dd42fab7-63a0-4b66-8264-335d337ec7b3-kube-api-access-7r8h8\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.954609 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt66j\" (UniqueName: \"kubernetes.io/projected/561df4eb-a62d-4ce9-bf47-79598eac0eb1-kube-api-access-qt66j\") pod \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.954736 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561df4eb-a62d-4ce9-bf47-79598eac0eb1-config-volume\") pod \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.954906 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561df4eb-a62d-4ce9-bf47-79598eac0eb1-secret-volume\") pod \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\" (UID: \"561df4eb-a62d-4ce9-bf47-79598eac0eb1\") " Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.955715 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/561df4eb-a62d-4ce9-bf47-79598eac0eb1-config-volume" (OuterVolumeSpecName: "config-volume") pod "561df4eb-a62d-4ce9-bf47-79598eac0eb1" (UID: "561df4eb-a62d-4ce9-bf47-79598eac0eb1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.958755 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.959259 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/561df4eb-a62d-4ce9-bf47-79598eac0eb1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "561df4eb-a62d-4ce9-bf47-79598eac0eb1" (UID: "561df4eb-a62d-4ce9-bf47-79598eac0eb1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:00:08 crc kubenswrapper[4992]: I1211 09:00:08.960159 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/561df4eb-a62d-4ce9-bf47-79598eac0eb1-kube-api-access-qt66j" (OuterVolumeSpecName: "kube-api-access-qt66j") pod "561df4eb-a62d-4ce9-bf47-79598eac0eb1" (UID: "561df4eb-a62d-4ce9-bf47-79598eac0eb1"). InnerVolumeSpecName "kube-api-access-qt66j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.057690 4992 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/561df4eb-a62d-4ce9-bf47-79598eac0eb1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.057730 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt66j\" (UniqueName: \"kubernetes.io/projected/561df4eb-a62d-4ce9-bf47-79598eac0eb1-kube-api-access-qt66j\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.057740 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/561df4eb-a62d-4ce9-bf47-79598eac0eb1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.194131 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"] Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.202385 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424015-cwvkq"] Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.461873 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" event={"ID":"561df4eb-a62d-4ce9-bf47-79598eac0eb1","Type":"ContainerDied","Data":"1efeea60de8e90856232b5f66ce01c6235c9044f6b26bc4ea9cf05f6d17de43b"} Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.462171 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1efeea60de8e90856232b5f66ce01c6235c9044f6b26bc4ea9cf05f6d17de43b" Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.461921 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424060-qd945" Dec 11 09:00:09 crc kubenswrapper[4992]: I1211 09:00:09.472191 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml"] Dec 11 09:00:10 crc kubenswrapper[4992]: I1211 09:00:10.119113 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7656745a-706d-4652-9db6-e94237d4999c" path="/var/lib/kubelet/pods/7656745a-706d-4652-9db6-e94237d4999c/volumes" Dec 11 09:00:10 crc kubenswrapper[4992]: I1211 09:00:10.473275 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" event={"ID":"dd42fab7-63a0-4b66-8264-335d337ec7b3","Type":"ContainerStarted","Data":"8e5456571d2657a3f2b4d0a0a6eee3dfdbae63c4e251b9fb350bc946da609c84"} Dec 11 09:00:11 crc kubenswrapper[4992]: I1211 09:00:11.906018 4992 scope.go:117] "RemoveContainer" 
containerID="e87c6cb69fa890883f967f56ad79b6d456b8721a4dd9a7fb5b3ef71cf0a40010" Dec 11 09:00:14 crc kubenswrapper[4992]: I1211 09:00:14.519948 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" event={"ID":"dd42fab7-63a0-4b66-8264-335d337ec7b3","Type":"ContainerStarted","Data":"6ded86d715ad9ebe6d147b26a795551b276913a5fcf9091b5aa7a5fef10774d3"} Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.776457 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" podStartSLOduration=16.426658195999998 podStartE2EDuration="20.776432461s" podCreationTimestamp="2025-12-11 09:00:08 +0000 UTC" firstStartedPulling="2025-12-11 09:00:09.487471829 +0000 UTC m=+2233.746945755" lastFinishedPulling="2025-12-11 09:00:13.837246094 +0000 UTC m=+2238.096720020" observedRunningTime="2025-12-11 09:00:14.544707838 +0000 UTC m=+2238.804181794" watchObservedRunningTime="2025-12-11 09:00:28.776432461 +0000 UTC m=+2253.035906387" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.786216 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ngmjn"] Dec 11 09:00:28 crc kubenswrapper[4992]: E1211 09:00:28.786759 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="561df4eb-a62d-4ce9-bf47-79598eac0eb1" containerName="collect-profiles" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.786782 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="561df4eb-a62d-4ce9-bf47-79598eac0eb1" containerName="collect-profiles" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.787036 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="561df4eb-a62d-4ce9-bf47-79598eac0eb1" containerName="collect-profiles" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.788919 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.801429 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngmjn"] Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.958891 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66t7v\" (UniqueName: \"kubernetes.io/projected/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-kube-api-access-66t7v\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.958963 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-catalog-content\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:28 crc kubenswrapper[4992]: I1211 09:00:28.959008 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-utilities\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.060480 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66t7v\" (UniqueName: \"kubernetes.io/projected/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-kube-api-access-66t7v\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.060536 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-catalog-content\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.060585 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-utilities\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.061202 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-utilities\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.061329 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-catalog-content\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.090396 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66t7v\" (UniqueName: \"kubernetes.io/projected/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-kube-api-access-66t7v\") pod \"redhat-marketplace-ngmjn\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.112846 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:29 crc kubenswrapper[4992]: I1211 09:00:29.664681 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngmjn"] Dec 11 09:00:29 crc kubenswrapper[4992]: W1211 09:00:29.673965 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc31e6d8f_b255_48b0_a3c0_4ce16aa19475.slice/crio-f3579ae2ba5dbdf3599c2fa5f4da50d37efd67c119c21969ab9a8909bf42f0ff WatchSource:0}: Error finding container f3579ae2ba5dbdf3599c2fa5f4da50d37efd67c119c21969ab9a8909bf42f0ff: Status 404 returned error can't find the container with id f3579ae2ba5dbdf3599c2fa5f4da50d37efd67c119c21969ab9a8909bf42f0ff Dec 11 09:00:30 crc kubenswrapper[4992]: I1211 09:00:30.657707 4992 generic.go:334] "Generic (PLEG): container finished" podID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerID="3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046" exitCode=0 Dec 11 09:00:30 crc kubenswrapper[4992]: I1211 09:00:30.657746 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngmjn" event={"ID":"c31e6d8f-b255-48b0-a3c0-4ce16aa19475","Type":"ContainerDied","Data":"3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046"} Dec 11 09:00:30 crc kubenswrapper[4992]: I1211 09:00:30.657772 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngmjn" event={"ID":"c31e6d8f-b255-48b0-a3c0-4ce16aa19475","Type":"ContainerStarted","Data":"f3579ae2ba5dbdf3599c2fa5f4da50d37efd67c119c21969ab9a8909bf42f0ff"} Dec 11 09:00:32 crc kubenswrapper[4992]: I1211 09:00:32.677120 4992 generic.go:334] "Generic (PLEG): container finished" podID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerID="e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff" exitCode=0 Dec 11 09:00:32 crc kubenswrapper[4992]: I1211 
09:00:32.677288 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngmjn" event={"ID":"c31e6d8f-b255-48b0-a3c0-4ce16aa19475","Type":"ContainerDied","Data":"e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff"} Dec 11 09:00:33 crc kubenswrapper[4992]: I1211 09:00:33.689009 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngmjn" event={"ID":"c31e6d8f-b255-48b0-a3c0-4ce16aa19475","Type":"ContainerStarted","Data":"b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf"} Dec 11 09:00:33 crc kubenswrapper[4992]: I1211 09:00:33.707388 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ngmjn" podStartSLOduration=3.073740329 podStartE2EDuration="5.707369283s" podCreationTimestamp="2025-12-11 09:00:28 +0000 UTC" firstStartedPulling="2025-12-11 09:00:30.659523732 +0000 UTC m=+2254.918997658" lastFinishedPulling="2025-12-11 09:00:33.293152686 +0000 UTC m=+2257.552626612" observedRunningTime="2025-12-11 09:00:33.706076731 +0000 UTC m=+2257.965550677" watchObservedRunningTime="2025-12-11 09:00:33.707369283 +0000 UTC m=+2257.966843209" Dec 11 09:00:35 crc kubenswrapper[4992]: I1211 09:00:35.379151 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:00:35 crc kubenswrapper[4992]: I1211 09:00:35.379593 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:00:39 crc 
kubenswrapper[4992]: I1211 09:00:39.113434 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:39 crc kubenswrapper[4992]: I1211 09:00:39.114593 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:39 crc kubenswrapper[4992]: I1211 09:00:39.158732 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:39 crc kubenswrapper[4992]: I1211 09:00:39.798992 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:39 crc kubenswrapper[4992]: I1211 09:00:39.844145 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngmjn"] Dec 11 09:00:41 crc kubenswrapper[4992]: I1211 09:00:41.757797 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ngmjn" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="registry-server" containerID="cri-o://b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf" gracePeriod=2 Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.223817 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.277181 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66t7v\" (UniqueName: \"kubernetes.io/projected/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-kube-api-access-66t7v\") pod \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.277328 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-utilities\") pod \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.277502 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-catalog-content\") pod \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\" (UID: \"c31e6d8f-b255-48b0-a3c0-4ce16aa19475\") " Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.278315 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-utilities" (OuterVolumeSpecName: "utilities") pod "c31e6d8f-b255-48b0-a3c0-4ce16aa19475" (UID: "c31e6d8f-b255-48b0-a3c0-4ce16aa19475"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.284583 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-kube-api-access-66t7v" (OuterVolumeSpecName: "kube-api-access-66t7v") pod "c31e6d8f-b255-48b0-a3c0-4ce16aa19475" (UID: "c31e6d8f-b255-48b0-a3c0-4ce16aa19475"). InnerVolumeSpecName "kube-api-access-66t7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.303260 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c31e6d8f-b255-48b0-a3c0-4ce16aa19475" (UID: "c31e6d8f-b255-48b0-a3c0-4ce16aa19475"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.378798 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66t7v\" (UniqueName: \"kubernetes.io/projected/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-kube-api-access-66t7v\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.378833 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.378843 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e6d8f-b255-48b0-a3c0-4ce16aa19475-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.768379 4992 generic.go:334] "Generic (PLEG): container finished" podID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerID="b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf" exitCode=0 Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.768431 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngmjn" event={"ID":"c31e6d8f-b255-48b0-a3c0-4ce16aa19475","Type":"ContainerDied","Data":"b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf"} Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.768444 4992 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ngmjn" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.768465 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ngmjn" event={"ID":"c31e6d8f-b255-48b0-a3c0-4ce16aa19475","Type":"ContainerDied","Data":"f3579ae2ba5dbdf3599c2fa5f4da50d37efd67c119c21969ab9a8909bf42f0ff"} Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.768484 4992 scope.go:117] "RemoveContainer" containerID="b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.795173 4992 scope.go:117] "RemoveContainer" containerID="e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.805479 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngmjn"] Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.814984 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ngmjn"] Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.845184 4992 scope.go:117] "RemoveContainer" containerID="3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.865821 4992 scope.go:117] "RemoveContainer" containerID="b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf" Dec 11 09:00:42 crc kubenswrapper[4992]: E1211 09:00:42.866233 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf\": container with ID starting with b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf not found: ID does not exist" containerID="b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.866275 4992 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf"} err="failed to get container status \"b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf\": rpc error: code = NotFound desc = could not find container \"b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf\": container with ID starting with b30ef852f1b9789c53c463ee9a51c281995f1a95ce7670d4c44edb5658de3fdf not found: ID does not exist" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.866301 4992 scope.go:117] "RemoveContainer" containerID="e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff" Dec 11 09:00:42 crc kubenswrapper[4992]: E1211 09:00:42.866593 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff\": container with ID starting with e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff not found: ID does not exist" containerID="e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.866652 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff"} err="failed to get container status \"e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff\": rpc error: code = NotFound desc = could not find container \"e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff\": container with ID starting with e3c9bd77f06d8e1c3100a2352d842397fda602e7afcb3664fec7e3e396f5f5ff not found: ID does not exist" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.866682 4992 scope.go:117] "RemoveContainer" containerID="3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046" Dec 11 09:00:42 crc kubenswrapper[4992]: E1211 
09:00:42.866946 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046\": container with ID starting with 3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046 not found: ID does not exist" containerID="3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046" Dec 11 09:00:42 crc kubenswrapper[4992]: I1211 09:00:42.866970 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046"} err="failed to get container status \"3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046\": rpc error: code = NotFound desc = could not find container \"3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046\": container with ID starting with 3a0bc12d976f6dd4dbe39ae26be17a33599cdbf4eaae0eee57f868d5cfc3b046 not found: ID does not exist" Dec 11 09:00:44 crc kubenswrapper[4992]: I1211 09:00:44.105152 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" path="/var/lib/kubelet/pods/c31e6d8f-b255-48b0-a3c0-4ce16aa19475/volumes" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.147479 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29424061-lk7db"] Dec 11 09:01:00 crc kubenswrapper[4992]: E1211 09:01:00.148762 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="extract-content" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.148786 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="extract-content" Dec 11 09:01:00 crc kubenswrapper[4992]: E1211 09:01:00.148801 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" 
containerName="registry-server" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.148810 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="registry-server" Dec 11 09:01:00 crc kubenswrapper[4992]: E1211 09:01:00.148851 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="extract-utilities" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.148864 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="extract-utilities" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.149167 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31e6d8f-b255-48b0-a3c0-4ce16aa19475" containerName="registry-server" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.150159 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.158963 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29424061-lk7db"] Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.335342 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-config-data\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.336082 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-fernet-keys\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc 
kubenswrapper[4992]: I1211 09:01:00.336172 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vn96\" (UniqueName: \"kubernetes.io/projected/7e48de49-fca3-4449-876a-2fafff903b2e-kube-api-access-5vn96\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.336345 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-combined-ca-bundle\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.438444 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-fernet-keys\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.438516 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vn96\" (UniqueName: \"kubernetes.io/projected/7e48de49-fca3-4449-876a-2fafff903b2e-kube-api-access-5vn96\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.438573 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-combined-ca-bundle\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 
09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.438650 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-config-data\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.444872 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-combined-ca-bundle\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.445812 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-fernet-keys\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.446167 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-config-data\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.461935 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vn96\" (UniqueName: \"kubernetes.io/projected/7e48de49-fca3-4449-876a-2fafff903b2e-kube-api-access-5vn96\") pod \"keystone-cron-29424061-lk7db\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.495190 4992 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:00 crc kubenswrapper[4992]: I1211 09:01:00.972549 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29424061-lk7db"] Dec 11 09:01:01 crc kubenswrapper[4992]: I1211 09:01:01.947957 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424061-lk7db" event={"ID":"7e48de49-fca3-4449-876a-2fafff903b2e","Type":"ContainerStarted","Data":"6ccb6e2d8f27181ef94ff2da1697599e9036c7dd08c983225622426c82ae2638"} Dec 11 09:01:01 crc kubenswrapper[4992]: I1211 09:01:01.948310 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424061-lk7db" event={"ID":"7e48de49-fca3-4449-876a-2fafff903b2e","Type":"ContainerStarted","Data":"ce02dfc3bd6c609d5cdac38932b061694a7fd8e684877030ebd7a9e77d3408b1"} Dec 11 09:01:01 crc kubenswrapper[4992]: I1211 09:01:01.965441 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29424061-lk7db" podStartSLOduration=1.9654170340000001 podStartE2EDuration="1.965417034s" podCreationTimestamp="2025-12-11 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:01:01.961444177 +0000 UTC m=+2286.220918103" watchObservedRunningTime="2025-12-11 09:01:01.965417034 +0000 UTC m=+2286.224890960" Dec 11 09:01:04 crc kubenswrapper[4992]: I1211 09:01:04.986235 4992 generic.go:334] "Generic (PLEG): container finished" podID="dd42fab7-63a0-4b66-8264-335d337ec7b3" containerID="6ded86d715ad9ebe6d147b26a795551b276913a5fcf9091b5aa7a5fef10774d3" exitCode=0 Dec 11 09:01:04 crc kubenswrapper[4992]: I1211 09:01:04.986353 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" 
event={"ID":"dd42fab7-63a0-4b66-8264-335d337ec7b3","Type":"ContainerDied","Data":"6ded86d715ad9ebe6d147b26a795551b276913a5fcf9091b5aa7a5fef10774d3"} Dec 11 09:01:04 crc kubenswrapper[4992]: I1211 09:01:04.989500 4992 generic.go:334] "Generic (PLEG): container finished" podID="7e48de49-fca3-4449-876a-2fafff903b2e" containerID="6ccb6e2d8f27181ef94ff2da1697599e9036c7dd08c983225622426c82ae2638" exitCode=0 Dec 11 09:01:04 crc kubenswrapper[4992]: I1211 09:01:04.989541 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424061-lk7db" event={"ID":"7e48de49-fca3-4449-876a-2fafff903b2e","Type":"ContainerDied","Data":"6ccb6e2d8f27181ef94ff2da1697599e9036c7dd08c983225622426c82ae2638"} Dec 11 09:01:05 crc kubenswrapper[4992]: I1211 09:01:05.378582 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:01:05 crc kubenswrapper[4992]: I1211 09:01:05.378993 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:01:05 crc kubenswrapper[4992]: I1211 09:01:05.379045 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:01:05 crc kubenswrapper[4992]: I1211 09:01:05.379944 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f"} 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:01:05 crc kubenswrapper[4992]: I1211 09:01:05.380017 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" gracePeriod=600 Dec 11 09:01:05 crc kubenswrapper[4992]: E1211 09:01:05.627768 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.001670 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" exitCode=0 Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.001742 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f"} Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.001799 4992 scope.go:117] "RemoveContainer" containerID="3a3dc42b5a3cd43c62970b42a2e0157ec46f68f2e53bb7c9edf2af183a976943" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.002687 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 
11 09:01:06 crc kubenswrapper[4992]: E1211 09:01:06.003006 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.420901 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.429573 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.565884 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vn96\" (UniqueName: \"kubernetes.io/projected/7e48de49-fca3-4449-876a-2fafff903b2e-kube-api-access-5vn96\") pod \"7e48de49-fca3-4449-876a-2fafff903b2e\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.565943 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-combined-ca-bundle\") pod \"7e48de49-fca3-4449-876a-2fafff903b2e\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566023 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-ovn-metadata-agent-neutron-config-0\") pod \"dd42fab7-63a0-4b66-8264-335d337ec7b3\" (UID: 
\"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566054 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-ssh-key\") pod \"dd42fab7-63a0-4b66-8264-335d337ec7b3\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566167 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-fernet-keys\") pod \"7e48de49-fca3-4449-876a-2fafff903b2e\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566197 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-config-data\") pod \"7e48de49-fca3-4449-876a-2fafff903b2e\" (UID: \"7e48de49-fca3-4449-876a-2fafff903b2e\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566281 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r8h8\" (UniqueName: \"kubernetes.io/projected/dd42fab7-63a0-4b66-8264-335d337ec7b3-kube-api-access-7r8h8\") pod \"dd42fab7-63a0-4b66-8264-335d337ec7b3\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566311 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-metadata-combined-ca-bundle\") pod \"dd42fab7-63a0-4b66-8264-335d337ec7b3\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566351 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-nova-metadata-neutron-config-0\") pod \"dd42fab7-63a0-4b66-8264-335d337ec7b3\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.566411 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-inventory\") pod \"dd42fab7-63a0-4b66-8264-335d337ec7b3\" (UID: \"dd42fab7-63a0-4b66-8264-335d337ec7b3\") " Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.571906 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7e48de49-fca3-4449-876a-2fafff903b2e" (UID: "7e48de49-fca3-4449-876a-2fafff903b2e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.572098 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd42fab7-63a0-4b66-8264-335d337ec7b3-kube-api-access-7r8h8" (OuterVolumeSpecName: "kube-api-access-7r8h8") pod "dd42fab7-63a0-4b66-8264-335d337ec7b3" (UID: "dd42fab7-63a0-4b66-8264-335d337ec7b3"). InnerVolumeSpecName "kube-api-access-7r8h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.572846 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e48de49-fca3-4449-876a-2fafff903b2e-kube-api-access-5vn96" (OuterVolumeSpecName: "kube-api-access-5vn96") pod "7e48de49-fca3-4449-876a-2fafff903b2e" (UID: "7e48de49-fca3-4449-876a-2fafff903b2e"). InnerVolumeSpecName "kube-api-access-5vn96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.579352 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "dd42fab7-63a0-4b66-8264-335d337ec7b3" (UID: "dd42fab7-63a0-4b66-8264-335d337ec7b3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.594761 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "dd42fab7-63a0-4b66-8264-335d337ec7b3" (UID: "dd42fab7-63a0-4b66-8264-335d337ec7b3"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.597202 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e48de49-fca3-4449-876a-2fafff903b2e" (UID: "7e48de49-fca3-4449-876a-2fafff903b2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.598978 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-inventory" (OuterVolumeSpecName: "inventory") pod "dd42fab7-63a0-4b66-8264-335d337ec7b3" (UID: "dd42fab7-63a0-4b66-8264-335d337ec7b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.600434 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dd42fab7-63a0-4b66-8264-335d337ec7b3" (UID: "dd42fab7-63a0-4b66-8264-335d337ec7b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.600776 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "dd42fab7-63a0-4b66-8264-335d337ec7b3" (UID: "dd42fab7-63a0-4b66-8264-335d337ec7b3"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.625837 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-config-data" (OuterVolumeSpecName: "config-data") pod "7e48de49-fca3-4449-876a-2fafff903b2e" (UID: "7e48de49-fca3-4449-876a-2fafff903b2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.668293 4992 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.668723 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.668792 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vn96\" (UniqueName: \"kubernetes.io/projected/7e48de49-fca3-4449-876a-2fafff903b2e-kube-api-access-5vn96\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.668862 4992 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.668926 4992 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.668989 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.669053 4992 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 
09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.669114 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e48de49-fca3-4449-876a-2fafff903b2e-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.669203 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r8h8\" (UniqueName: \"kubernetes.io/projected/dd42fab7-63a0-4b66-8264-335d337ec7b3-kube-api-access-7r8h8\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:06 crc kubenswrapper[4992]: I1211 09:01:06.669264 4992 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd42fab7-63a0-4b66-8264-335d337ec7b3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.017897 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.017914 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml" event={"ID":"dd42fab7-63a0-4b66-8264-335d337ec7b3","Type":"ContainerDied","Data":"8e5456571d2657a3f2b4d0a0a6eee3dfdbae63c4e251b9fb350bc946da609c84"} Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.018945 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5456571d2657a3f2b4d0a0a6eee3dfdbae63c4e251b9fb350bc946da609c84" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.021063 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424061-lk7db" event={"ID":"7e48de49-fca3-4449-876a-2fafff903b2e","Type":"ContainerDied","Data":"ce02dfc3bd6c609d5cdac38932b061694a7fd8e684877030ebd7a9e77d3408b1"} Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.021095 
4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce02dfc3bd6c609d5cdac38932b061694a7fd8e684877030ebd7a9e77d3408b1" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.021242 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424061-lk7db" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.122003 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln"] Dec 11 09:01:07 crc kubenswrapper[4992]: E1211 09:01:07.122472 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd42fab7-63a0-4b66-8264-335d337ec7b3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.122488 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd42fab7-63a0-4b66-8264-335d337ec7b3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 09:01:07 crc kubenswrapper[4992]: E1211 09:01:07.122508 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e48de49-fca3-4449-876a-2fafff903b2e" containerName="keystone-cron" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.122514 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e48de49-fca3-4449-876a-2fafff903b2e" containerName="keystone-cron" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.122715 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e48de49-fca3-4449-876a-2fafff903b2e" containerName="keystone-cron" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.122741 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd42fab7-63a0-4b66-8264-335d337ec7b3" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.123371 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.133536 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln"] Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.171796 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.171937 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.171985 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.172022 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.172274 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.280768 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.280918 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.280962 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.281042 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.281122 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww8l6\" (UniqueName: \"kubernetes.io/projected/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-kube-api-access-ww8l6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.383020 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.383130 4992 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.383172 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.383298 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.383370 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww8l6\" (UniqueName: \"kubernetes.io/projected/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-kube-api-access-ww8l6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.388172 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.388172 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.388661 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.390238 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.406015 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww8l6\" (UniqueName: \"kubernetes.io/projected/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-kube-api-access-ww8l6\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-r5lln\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.486395 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.977690 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln"] Dec 11 09:01:07 crc kubenswrapper[4992]: I1211 09:01:07.981102 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:01:08 crc kubenswrapper[4992]: I1211 09:01:08.032679 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" event={"ID":"c8aeb03b-f704-4b27-8eb5-afeac15bcd18","Type":"ContainerStarted","Data":"02a636c112dd3b11aac56c897ad4cbb7b9adf52089ee0de1f2aafc43a2309989"} Dec 11 09:01:11 crc kubenswrapper[4992]: I1211 09:01:11.060971 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" event={"ID":"c8aeb03b-f704-4b27-8eb5-afeac15bcd18","Type":"ContainerStarted","Data":"8dac81a76e7a723579b50b6bf2116ff9223770c4b4ff078cbf50fa12c0784b47"} Dec 11 09:01:11 crc kubenswrapper[4992]: I1211 09:01:11.079437 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" podStartSLOduration=1.454333037 podStartE2EDuration="4.079418584s" podCreationTimestamp="2025-12-11 09:01:07 +0000 UTC" firstStartedPulling="2025-12-11 09:01:07.980914459 +0000 UTC m=+2292.240388385" lastFinishedPulling="2025-12-11 09:01:10.605999986 +0000 UTC m=+2294.865473932" observedRunningTime="2025-12-11 09:01:11.074227737 +0000 UTC m=+2295.333701663" watchObservedRunningTime="2025-12-11 09:01:11.079418584 +0000 UTC m=+2295.338892510" Dec 11 09:01:21 crc kubenswrapper[4992]: I1211 09:01:21.095515 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:01:21 crc kubenswrapper[4992]: E1211 
09:01:21.096340 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:01:33 crc kubenswrapper[4992]: I1211 09:01:33.094428 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:01:33 crc kubenswrapper[4992]: E1211 09:01:33.095164 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.043848 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bpd7t"] Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.046909 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.055371 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpd7t"] Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.228974 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-utilities\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.229104 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdhhc\" (UniqueName: \"kubernetes.io/projected/24a57360-6e70-4adc-806f-47adda947e64-kube-api-access-pdhhc\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.229354 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-catalog-content\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.331556 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-utilities\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.331702 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pdhhc\" (UniqueName: \"kubernetes.io/projected/24a57360-6e70-4adc-806f-47adda947e64-kube-api-access-pdhhc\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.331747 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-catalog-content\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.332187 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-utilities\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.332286 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-catalog-content\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.371788 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdhhc\" (UniqueName: \"kubernetes.io/projected/24a57360-6e70-4adc-806f-47adda947e64-kube-api-access-pdhhc\") pod \"community-operators-bpd7t\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:35 crc kubenswrapper[4992]: I1211 09:01:35.669097 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:01:36 crc kubenswrapper[4992]: W1211 09:01:36.164672 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24a57360_6e70_4adc_806f_47adda947e64.slice/crio-096b99e5b1d125d7fdc07820da12ed9bc9324948285dd98d1e751d53f555af6c WatchSource:0}: Error finding container 096b99e5b1d125d7fdc07820da12ed9bc9324948285dd98d1e751d53f555af6c: Status 404 returned error can't find the container with id 096b99e5b1d125d7fdc07820da12ed9bc9324948285dd98d1e751d53f555af6c Dec 11 09:01:36 crc kubenswrapper[4992]: I1211 09:01:36.164816 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bpd7t"] Dec 11 09:01:36 crc kubenswrapper[4992]: I1211 09:01:36.275193 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerStarted","Data":"096b99e5b1d125d7fdc07820da12ed9bc9324948285dd98d1e751d53f555af6c"} Dec 11 09:01:37 crc kubenswrapper[4992]: I1211 09:01:37.285128 4992 generic.go:334] "Generic (PLEG): container finished" podID="24a57360-6e70-4adc-806f-47adda947e64" containerID="a6d65b9047361b54bdf2882dc4c1d7db9c1f3a08f48a021a20ca2bd202719644" exitCode=0 Dec 11 09:01:37 crc kubenswrapper[4992]: I1211 09:01:37.285176 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerDied","Data":"a6d65b9047361b54bdf2882dc4c1d7db9c1f3a08f48a021a20ca2bd202719644"} Dec 11 09:01:40 crc kubenswrapper[4992]: I1211 09:01:40.310890 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" 
event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerStarted","Data":"f8fc33e9575adb7202b0c3f6d47b9f0179f147864e75783ed67bd45539988f06"} Dec 11 09:01:41 crc kubenswrapper[4992]: I1211 09:01:41.321698 4992 generic.go:334] "Generic (PLEG): container finished" podID="24a57360-6e70-4adc-806f-47adda947e64" containerID="f8fc33e9575adb7202b0c3f6d47b9f0179f147864e75783ed67bd45539988f06" exitCode=0 Dec 11 09:01:41 crc kubenswrapper[4992]: I1211 09:01:41.321751 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerDied","Data":"f8fc33e9575adb7202b0c3f6d47b9f0179f147864e75783ed67bd45539988f06"} Dec 11 09:01:45 crc kubenswrapper[4992]: I1211 09:01:45.095192 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:01:45 crc kubenswrapper[4992]: E1211 09:01:45.096511 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:01:56 crc kubenswrapper[4992]: I1211 09:01:56.484680 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerStarted","Data":"ab3314b97128cb491bc4d26f31f0591b8f8f0cd7856aedce21139d2ea3fb8690"} Dec 11 09:01:56 crc kubenswrapper[4992]: I1211 09:01:56.509581 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bpd7t" podStartSLOduration=3.20078406 podStartE2EDuration="21.509562622s" 
podCreationTimestamp="2025-12-11 09:01:35 +0000 UTC" firstStartedPulling="2025-12-11 09:01:37.287000817 +0000 UTC m=+2321.546474743" lastFinishedPulling="2025-12-11 09:01:55.595779359 +0000 UTC m=+2339.855253305" observedRunningTime="2025-12-11 09:01:56.499514238 +0000 UTC m=+2340.758988184" watchObservedRunningTime="2025-12-11 09:01:56.509562622 +0000 UTC m=+2340.769036548" Dec 11 09:01:59 crc kubenswrapper[4992]: I1211 09:01:59.095405 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:01:59 crc kubenswrapper[4992]: E1211 09:01:59.096178 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:02:05 crc kubenswrapper[4992]: I1211 09:02:05.670123 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:02:05 crc kubenswrapper[4992]: I1211 09:02:05.670998 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:02:05 crc kubenswrapper[4992]: I1211 09:02:05.718832 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:02:06 crc kubenswrapper[4992]: I1211 09:02:06.632451 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:02:06 crc kubenswrapper[4992]: I1211 09:02:06.683003 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpd7t"] Dec 11 09:02:08 
crc kubenswrapper[4992]: I1211 09:02:08.586259 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bpd7t" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="registry-server" containerID="cri-o://ab3314b97128cb491bc4d26f31f0591b8f8f0cd7856aedce21139d2ea3fb8690" gracePeriod=2 Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.095250 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:02:11 crc kubenswrapper[4992]: E1211 09:02:11.095740 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.611908 4992 generic.go:334] "Generic (PLEG): container finished" podID="24a57360-6e70-4adc-806f-47adda947e64" containerID="ab3314b97128cb491bc4d26f31f0591b8f8f0cd7856aedce21139d2ea3fb8690" exitCode=0 Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.611978 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerDied","Data":"ab3314b97128cb491bc4d26f31f0591b8f8f0cd7856aedce21139d2ea3fb8690"} Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.842690 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.911917 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-utilities\") pod \"24a57360-6e70-4adc-806f-47adda947e64\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.912121 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdhhc\" (UniqueName: \"kubernetes.io/projected/24a57360-6e70-4adc-806f-47adda947e64-kube-api-access-pdhhc\") pod \"24a57360-6e70-4adc-806f-47adda947e64\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.912155 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-catalog-content\") pod \"24a57360-6e70-4adc-806f-47adda947e64\" (UID: \"24a57360-6e70-4adc-806f-47adda947e64\") " Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.912898 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-utilities" (OuterVolumeSpecName: "utilities") pod "24a57360-6e70-4adc-806f-47adda947e64" (UID: "24a57360-6e70-4adc-806f-47adda947e64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.919995 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a57360-6e70-4adc-806f-47adda947e64-kube-api-access-pdhhc" (OuterVolumeSpecName: "kube-api-access-pdhhc") pod "24a57360-6e70-4adc-806f-47adda947e64" (UID: "24a57360-6e70-4adc-806f-47adda947e64"). InnerVolumeSpecName "kube-api-access-pdhhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:02:11 crc kubenswrapper[4992]: I1211 09:02:11.963698 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24a57360-6e70-4adc-806f-47adda947e64" (UID: "24a57360-6e70-4adc-806f-47adda947e64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.013981 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.014030 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdhhc\" (UniqueName: \"kubernetes.io/projected/24a57360-6e70-4adc-806f-47adda947e64-kube-api-access-pdhhc\") on node \"crc\" DevicePath \"\"" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.014043 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24a57360-6e70-4adc-806f-47adda947e64-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.624101 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bpd7t" event={"ID":"24a57360-6e70-4adc-806f-47adda947e64","Type":"ContainerDied","Data":"096b99e5b1d125d7fdc07820da12ed9bc9324948285dd98d1e751d53f555af6c"} Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.624146 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bpd7t" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.624471 4992 scope.go:117] "RemoveContainer" containerID="ab3314b97128cb491bc4d26f31f0591b8f8f0cd7856aedce21139d2ea3fb8690" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.655435 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bpd7t"] Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.665677 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bpd7t"] Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.665958 4992 scope.go:117] "RemoveContainer" containerID="f8fc33e9575adb7202b0c3f6d47b9f0179f147864e75783ed67bd45539988f06" Dec 11 09:02:12 crc kubenswrapper[4992]: I1211 09:02:12.691572 4992 scope.go:117] "RemoveContainer" containerID="a6d65b9047361b54bdf2882dc4c1d7db9c1f3a08f48a021a20ca2bd202719644" Dec 11 09:02:14 crc kubenswrapper[4992]: I1211 09:02:14.110278 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a57360-6e70-4adc-806f-47adda947e64" path="/var/lib/kubelet/pods/24a57360-6e70-4adc-806f-47adda947e64/volumes" Dec 11 09:02:24 crc kubenswrapper[4992]: I1211 09:02:24.095472 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:02:24 crc kubenswrapper[4992]: E1211 09:02:24.096419 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:02:39 crc kubenswrapper[4992]: I1211 09:02:39.095376 4992 scope.go:117] "RemoveContainer" 
containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:02:39 crc kubenswrapper[4992]: E1211 09:02:39.096306 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:02:51 crc kubenswrapper[4992]: I1211 09:02:51.094912 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:02:51 crc kubenswrapper[4992]: E1211 09:02:51.095722 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:03:02 crc kubenswrapper[4992]: I1211 09:03:02.095783 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:03:02 crc kubenswrapper[4992]: E1211 09:03:02.096766 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:03:13 crc kubenswrapper[4992]: I1211 09:03:13.094866 4992 scope.go:117] 
"RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:03:13 crc kubenswrapper[4992]: E1211 09:03:13.095596 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:03:27 crc kubenswrapper[4992]: I1211 09:03:27.095385 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:03:27 crc kubenswrapper[4992]: E1211 09:03:27.096584 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:03:38 crc kubenswrapper[4992]: I1211 09:03:38.095956 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:03:38 crc kubenswrapper[4992]: E1211 09:03:38.097319 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:03:52 crc kubenswrapper[4992]: I1211 09:03:52.095487 
4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:03:52 crc kubenswrapper[4992]: E1211 09:03:52.096268 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:04:04 crc kubenswrapper[4992]: I1211 09:04:04.095670 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:04:04 crc kubenswrapper[4992]: E1211 09:04:04.096496 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:04:17 crc kubenswrapper[4992]: I1211 09:04:17.095885 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:04:17 crc kubenswrapper[4992]: E1211 09:04:17.096734 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:04:32 crc kubenswrapper[4992]: I1211 
09:04:32.095226 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:04:32 crc kubenswrapper[4992]: E1211 09:04:32.096009 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:04:44 crc kubenswrapper[4992]: I1211 09:04:44.095136 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:04:44 crc kubenswrapper[4992]: E1211 09:04:44.096023 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:04:55 crc kubenswrapper[4992]: I1211 09:04:55.094934 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:04:55 crc kubenswrapper[4992]: E1211 09:04:55.095728 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:05:08 crc 
kubenswrapper[4992]: I1211 09:05:08.095465 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:05:08 crc kubenswrapper[4992]: E1211 09:05:08.096419 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:05:22 crc kubenswrapper[4992]: I1211 09:05:22.095674 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:05:22 crc kubenswrapper[4992]: E1211 09:05:22.096375 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:05:33 crc kubenswrapper[4992]: I1211 09:05:33.095525 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:05:33 crc kubenswrapper[4992]: E1211 09:05:33.096271 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 
11 09:05:36 crc kubenswrapper[4992]: I1211 09:05:36.525131 4992 generic.go:334] "Generic (PLEG): container finished" podID="c8aeb03b-f704-4b27-8eb5-afeac15bcd18" containerID="8dac81a76e7a723579b50b6bf2116ff9223770c4b4ff078cbf50fa12c0784b47" exitCode=0 Dec 11 09:05:36 crc kubenswrapper[4992]: I1211 09:05:36.525214 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" event={"ID":"c8aeb03b-f704-4b27-8eb5-afeac15bcd18","Type":"ContainerDied","Data":"8dac81a76e7a723579b50b6bf2116ff9223770c4b4ff078cbf50fa12c0784b47"} Dec 11 09:05:37 crc kubenswrapper[4992]: I1211 09:05:37.924609 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.041921 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-secret-0\") pod \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.042035 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww8l6\" (UniqueName: \"kubernetes.io/projected/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-kube-api-access-ww8l6\") pod \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.042095 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-ssh-key\") pod \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.042163 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-inventory\") pod \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.042270 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-combined-ca-bundle\") pod \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\" (UID: \"c8aeb03b-f704-4b27-8eb5-afeac15bcd18\") " Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.052932 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c8aeb03b-f704-4b27-8eb5-afeac15bcd18" (UID: "c8aeb03b-f704-4b27-8eb5-afeac15bcd18"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.053918 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-kube-api-access-ww8l6" (OuterVolumeSpecName: "kube-api-access-ww8l6") pod "c8aeb03b-f704-4b27-8eb5-afeac15bcd18" (UID: "c8aeb03b-f704-4b27-8eb5-afeac15bcd18"). InnerVolumeSpecName "kube-api-access-ww8l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.081540 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-inventory" (OuterVolumeSpecName: "inventory") pod "c8aeb03b-f704-4b27-8eb5-afeac15bcd18" (UID: "c8aeb03b-f704-4b27-8eb5-afeac15bcd18"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.087479 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c8aeb03b-f704-4b27-8eb5-afeac15bcd18" (UID: "c8aeb03b-f704-4b27-8eb5-afeac15bcd18"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.104225 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c8aeb03b-f704-4b27-8eb5-afeac15bcd18" (UID: "c8aeb03b-f704-4b27-8eb5-afeac15bcd18"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.144686 4992 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.144722 4992 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.144732 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww8l6\" (UniqueName: \"kubernetes.io/projected/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-kube-api-access-ww8l6\") on node \"crc\" DevicePath \"\"" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.144741 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-ssh-key\") on node \"crc\" DevicePath 
\"\"" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.144752 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8aeb03b-f704-4b27-8eb5-afeac15bcd18-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.547855 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" event={"ID":"c8aeb03b-f704-4b27-8eb5-afeac15bcd18","Type":"ContainerDied","Data":"02a636c112dd3b11aac56c897ad4cbb7b9adf52089ee0de1f2aafc43a2309989"} Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.548211 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a636c112dd3b11aac56c897ad4cbb7b9adf52089ee0de1f2aafc43a2309989" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.547975 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-r5lln" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.680808 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls"] Dec 11 09:05:38 crc kubenswrapper[4992]: E1211 09:05:38.681297 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8aeb03b-f704-4b27-8eb5-afeac15bcd18" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.681318 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8aeb03b-f704-4b27-8eb5-afeac15bcd18" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 09:05:38 crc kubenswrapper[4992]: E1211 09:05:38.681334 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="extract-utilities" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.681343 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="extract-utilities" Dec 11 09:05:38 crc kubenswrapper[4992]: E1211 09:05:38.681354 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="extract-content" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.681361 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="extract-content" Dec 11 09:05:38 crc kubenswrapper[4992]: E1211 09:05:38.681373 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="registry-server" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.681380 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="registry-server" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.681576 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8aeb03b-f704-4b27-8eb5-afeac15bcd18" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.681598 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a57360-6e70-4adc-806f-47adda947e64" containerName="registry-server" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.682328 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.687157 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.687744 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.688076 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.688493 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.688885 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.688902 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.689552 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.703341 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls"] Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.861868 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: 
I1211 09:05:38.861935 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.861983 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.862017 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.862168 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.862335 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.862455 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9w2s\" (UniqueName: \"kubernetes.io/projected/b236958b-e08b-46ec-9e79-772bcb3d6d14-kube-api-access-p9w2s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.862564 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.862678 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964484 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964590 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9w2s\" (UniqueName: \"kubernetes.io/projected/b236958b-e08b-46ec-9e79-772bcb3d6d14-kube-api-access-p9w2s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964669 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964714 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964872 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964917 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.964967 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.965011 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.965040 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.966446 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 
09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.970948 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.971171 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.972116 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.973182 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.974945 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: 
\"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.975834 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.986299 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:38 crc kubenswrapper[4992]: I1211 09:05:38.995882 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9w2s\" (UniqueName: \"kubernetes.io/projected/b236958b-e08b-46ec-9e79-772bcb3d6d14-kube-api-access-p9w2s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gnrls\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:39 crc kubenswrapper[4992]: I1211 09:05:39.000342 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:05:39 crc kubenswrapper[4992]: I1211 09:05:39.529429 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls"] Dec 11 09:05:39 crc kubenswrapper[4992]: I1211 09:05:39.556954 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" event={"ID":"b236958b-e08b-46ec-9e79-772bcb3d6d14","Type":"ContainerStarted","Data":"01a5c27f72bc36eeae9937f12e4fb3e57a4ffbbebd2e926dbde5a5cdce3b22ca"} Dec 11 09:05:40 crc kubenswrapper[4992]: I1211 09:05:40.569960 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" event={"ID":"b236958b-e08b-46ec-9e79-772bcb3d6d14","Type":"ContainerStarted","Data":"19b190e76b187fb64a588e96d1cccaff835f3d418f107fbc3ac9bdf6d217f0c4"} Dec 11 09:05:40 crc kubenswrapper[4992]: I1211 09:05:40.591855 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" podStartSLOduration=1.976971007 podStartE2EDuration="2.591836764s" podCreationTimestamp="2025-12-11 09:05:38 +0000 UTC" firstStartedPulling="2025-12-11 09:05:39.538893594 +0000 UTC m=+2563.798367520" lastFinishedPulling="2025-12-11 09:05:40.153759341 +0000 UTC m=+2564.413233277" observedRunningTime="2025-12-11 09:05:40.588396531 +0000 UTC m=+2564.847870477" watchObservedRunningTime="2025-12-11 09:05:40.591836764 +0000 UTC m=+2564.851310690" Dec 11 09:05:48 crc kubenswrapper[4992]: I1211 09:05:48.099586 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:05:48 crc kubenswrapper[4992]: E1211 09:05:48.100587 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:05:59 crc kubenswrapper[4992]: I1211 09:05:59.095094 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:05:59 crc kubenswrapper[4992]: E1211 09:05:59.096003 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.547544 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rp4qc"] Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.550370 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.558423 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4qc"] Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.619042 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-utilities\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.619417 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg4sl\" (UniqueName: \"kubernetes.io/projected/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-kube-api-access-fg4sl\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.619734 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-catalog-content\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.723385 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg4sl\" (UniqueName: \"kubernetes.io/projected/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-kube-api-access-fg4sl\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.723548 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-catalog-content\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.723727 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-utilities\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.724430 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-catalog-content\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.724666 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-utilities\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.751026 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg4sl\" (UniqueName: \"kubernetes.io/projected/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-kube-api-access-fg4sl\") pod \"certified-operators-rp4qc\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:01 crc kubenswrapper[4992]: I1211 09:06:01.875729 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:02 crc kubenswrapper[4992]: W1211 09:06:02.548197 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode24be544_c87f_4181_b3c6_fbcb9c10c2a9.slice/crio-453a672ac3504a5129c5aba9bae4a8856d92e6bc43565904369af70ed5d0133c WatchSource:0}: Error finding container 453a672ac3504a5129c5aba9bae4a8856d92e6bc43565904369af70ed5d0133c: Status 404 returned error can't find the container with id 453a672ac3504a5129c5aba9bae4a8856d92e6bc43565904369af70ed5d0133c Dec 11 09:06:02 crc kubenswrapper[4992]: I1211 09:06:02.550214 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4qc"] Dec 11 09:06:02 crc kubenswrapper[4992]: I1211 09:06:02.786718 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerStarted","Data":"453a672ac3504a5129c5aba9bae4a8856d92e6bc43565904369af70ed5d0133c"} Dec 11 09:06:03 crc kubenswrapper[4992]: I1211 09:06:03.799268 4992 generic.go:334] "Generic (PLEG): container finished" podID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerID="aff333a8d69fb034400c6926d4ae2ba4c6fe2d893c6dc0b520a5cf5489fc9086" exitCode=0 Dec 11 09:06:03 crc kubenswrapper[4992]: I1211 09:06:03.799311 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerDied","Data":"aff333a8d69fb034400c6926d4ae2ba4c6fe2d893c6dc0b520a5cf5489fc9086"} Dec 11 09:06:05 crc kubenswrapper[4992]: I1211 09:06:05.821861 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" 
event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerStarted","Data":"45d04f1f187c857aa59fd2fe308a7ba9cc2de116e4e6f460832eca20e882ea97"} Dec 11 09:06:06 crc kubenswrapper[4992]: I1211 09:06:06.835512 4992 generic.go:334] "Generic (PLEG): container finished" podID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerID="45d04f1f187c857aa59fd2fe308a7ba9cc2de116e4e6f460832eca20e882ea97" exitCode=0 Dec 11 09:06:06 crc kubenswrapper[4992]: I1211 09:06:06.835578 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerDied","Data":"45d04f1f187c857aa59fd2fe308a7ba9cc2de116e4e6f460832eca20e882ea97"} Dec 11 09:06:08 crc kubenswrapper[4992]: I1211 09:06:08.855829 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerStarted","Data":"839ec6350f268a3e681fd611b53244eb2cf31568048e46228d00cdc198380204"} Dec 11 09:06:10 crc kubenswrapper[4992]: I1211 09:06:10.096381 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:06:10 crc kubenswrapper[4992]: I1211 09:06:10.884014 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"3c32013c9ab0dc82dc452f64e312a8e8636dc59a14ffb6b9183a77e995cdb69b"} Dec 11 09:06:10 crc kubenswrapper[4992]: I1211 09:06:10.918741 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rp4qc" podStartSLOduration=5.910903627 podStartE2EDuration="9.918698626s" podCreationTimestamp="2025-12-11 09:06:01 +0000 UTC" firstStartedPulling="2025-12-11 09:06:03.801891713 +0000 UTC m=+2588.061365639" lastFinishedPulling="2025-12-11 
09:06:07.809686712 +0000 UTC m=+2592.069160638" observedRunningTime="2025-12-11 09:06:08.886791527 +0000 UTC m=+2593.146265463" watchObservedRunningTime="2025-12-11 09:06:10.918698626 +0000 UTC m=+2595.178172552" Dec 11 09:06:11 crc kubenswrapper[4992]: I1211 09:06:11.875938 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:11 crc kubenswrapper[4992]: I1211 09:06:11.876355 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:11 crc kubenswrapper[4992]: I1211 09:06:11.927259 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:21 crc kubenswrapper[4992]: I1211 09:06:21.923955 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:21 crc kubenswrapper[4992]: I1211 09:06:21.972789 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4qc"] Dec 11 09:06:22 crc kubenswrapper[4992]: I1211 09:06:22.009081 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rp4qc" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="registry-server" containerID="cri-o://839ec6350f268a3e681fd611b53244eb2cf31568048e46228d00cdc198380204" gracePeriod=2 Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.020797 4992 generic.go:334] "Generic (PLEG): container finished" podID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerID="839ec6350f268a3e681fd611b53244eb2cf31568048e46228d00cdc198380204" exitCode=0 Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.020888 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" 
event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerDied","Data":"839ec6350f268a3e681fd611b53244eb2cf31568048e46228d00cdc198380204"} Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.274849 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.370807 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-utilities\") pod \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.370911 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-catalog-content\") pod \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.371086 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg4sl\" (UniqueName: \"kubernetes.io/projected/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-kube-api-access-fg4sl\") pod \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\" (UID: \"e24be544-c87f-4181-b3c6-fbcb9c10c2a9\") " Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.371775 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-utilities" (OuterVolumeSpecName: "utilities") pod "e24be544-c87f-4181-b3c6-fbcb9c10c2a9" (UID: "e24be544-c87f-4181-b3c6-fbcb9c10c2a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.376499 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-kube-api-access-fg4sl" (OuterVolumeSpecName: "kube-api-access-fg4sl") pod "e24be544-c87f-4181-b3c6-fbcb9c10c2a9" (UID: "e24be544-c87f-4181-b3c6-fbcb9c10c2a9"). InnerVolumeSpecName "kube-api-access-fg4sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.426884 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e24be544-c87f-4181-b3c6-fbcb9c10c2a9" (UID: "e24be544-c87f-4181-b3c6-fbcb9c10c2a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.473505 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.473531 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:06:23 crc kubenswrapper[4992]: I1211 09:06:23.473541 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg4sl\" (UniqueName: \"kubernetes.io/projected/e24be544-c87f-4181-b3c6-fbcb9c10c2a9-kube-api-access-fg4sl\") on node \"crc\" DevicePath \"\"" Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.034130 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4qc" 
event={"ID":"e24be544-c87f-4181-b3c6-fbcb9c10c2a9","Type":"ContainerDied","Data":"453a672ac3504a5129c5aba9bae4a8856d92e6bc43565904369af70ed5d0133c"} Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.034170 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4qc" Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.034200 4992 scope.go:117] "RemoveContainer" containerID="839ec6350f268a3e681fd611b53244eb2cf31568048e46228d00cdc198380204" Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.056851 4992 scope.go:117] "RemoveContainer" containerID="45d04f1f187c857aa59fd2fe308a7ba9cc2de116e4e6f460832eca20e882ea97" Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.072644 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4qc"] Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.081865 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rp4qc"] Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.088699 4992 scope.go:117] "RemoveContainer" containerID="aff333a8d69fb034400c6926d4ae2ba4c6fe2d893c6dc0b520a5cf5489fc9086" Dec 11 09:06:24 crc kubenswrapper[4992]: I1211 09:06:24.106166 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" path="/var/lib/kubelet/pods/e24be544-c87f-4181-b3c6-fbcb9c10c2a9/volumes" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.649383 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tchwr"] Dec 11 09:07:17 crc kubenswrapper[4992]: E1211 09:07:17.650615 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="extract-content" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.650656 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="extract-content" Dec 11 09:07:17 crc kubenswrapper[4992]: E1211 09:07:17.650677 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="registry-server" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.650686 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="registry-server" Dec 11 09:07:17 crc kubenswrapper[4992]: E1211 09:07:17.650736 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="extract-utilities" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.650745 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="extract-utilities" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.651008 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24be544-c87f-4181-b3c6-fbcb9c10c2a9" containerName="registry-server" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.652997 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.667509 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tchwr"] Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.750151 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-utilities\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.750213 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-catalog-content\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.750758 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vdg\" (UniqueName: \"kubernetes.io/projected/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-kube-api-access-z8vdg\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.852889 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vdg\" (UniqueName: \"kubernetes.io/projected/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-kube-api-access-z8vdg\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.852996 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-utilities\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.853050 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-catalog-content\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.853540 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-catalog-content\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.853697 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-utilities\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.875578 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vdg\" (UniqueName: \"kubernetes.io/projected/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-kube-api-access-z8vdg\") pod \"redhat-operators-tchwr\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:17 crc kubenswrapper[4992]: I1211 09:07:17.997330 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:18 crc kubenswrapper[4992]: I1211 09:07:18.474244 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tchwr"] Dec 11 09:07:18 crc kubenswrapper[4992]: I1211 09:07:18.539355 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerStarted","Data":"5e4b816e87ea2dec1ef7028f53eff8b47f11c8573643c50c1d24820d509742ef"} Dec 11 09:07:19 crc kubenswrapper[4992]: I1211 09:07:19.550936 4992 generic.go:334] "Generic (PLEG): container finished" podID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerID="79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7" exitCode=0 Dec 11 09:07:19 crc kubenswrapper[4992]: I1211 09:07:19.550995 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerDied","Data":"79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7"} Dec 11 09:07:19 crc kubenswrapper[4992]: I1211 09:07:19.554889 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:07:22 crc kubenswrapper[4992]: I1211 09:07:22.605963 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerStarted","Data":"4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa"} Dec 11 09:07:30 crc kubenswrapper[4992]: I1211 09:07:30.695965 4992 generic.go:334] "Generic (PLEG): container finished" podID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerID="4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa" exitCode=0 Dec 11 09:07:30 crc kubenswrapper[4992]: I1211 09:07:30.696207 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerDied","Data":"4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa"} Dec 11 09:07:34 crc kubenswrapper[4992]: I1211 09:07:34.735380 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerStarted","Data":"c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae"} Dec 11 09:07:34 crc kubenswrapper[4992]: I1211 09:07:34.762329 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tchwr" podStartSLOduration=3.459147327 podStartE2EDuration="17.762303856s" podCreationTimestamp="2025-12-11 09:07:17 +0000 UTC" firstStartedPulling="2025-12-11 09:07:19.55455821 +0000 UTC m=+2663.814032136" lastFinishedPulling="2025-12-11 09:07:33.857714739 +0000 UTC m=+2678.117188665" observedRunningTime="2025-12-11 09:07:34.752804716 +0000 UTC m=+2679.012278652" watchObservedRunningTime="2025-12-11 09:07:34.762303856 +0000 UTC m=+2679.021777792" Dec 11 09:07:37 crc kubenswrapper[4992]: I1211 09:07:37.997622 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:37 crc kubenswrapper[4992]: I1211 09:07:37.998213 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:39 crc kubenswrapper[4992]: I1211 09:07:39.045373 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tchwr" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="registry-server" probeResult="failure" output=< Dec 11 09:07:39 crc kubenswrapper[4992]: timeout: failed to connect service ":50051" within 1s Dec 11 09:07:39 crc kubenswrapper[4992]: > Dec 11 09:07:48 crc kubenswrapper[4992]: I1211 
09:07:48.040768 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:48 crc kubenswrapper[4992]: I1211 09:07:48.090833 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:48 crc kubenswrapper[4992]: I1211 09:07:48.777768 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tchwr"] Dec 11 09:07:49 crc kubenswrapper[4992]: I1211 09:07:49.869336 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tchwr" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="registry-server" containerID="cri-o://c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae" gracePeriod=2 Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.348933 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.506933 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vdg\" (UniqueName: \"kubernetes.io/projected/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-kube-api-access-z8vdg\") pod \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.507085 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-catalog-content\") pod \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.507241 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-utilities\") pod \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\" (UID: \"eda33db2-b2c8-4df6-8593-ce2f0ec0168f\") " Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.508204 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-utilities" (OuterVolumeSpecName: "utilities") pod "eda33db2-b2c8-4df6-8593-ce2f0ec0168f" (UID: "eda33db2-b2c8-4df6-8593-ce2f0ec0168f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.513751 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-kube-api-access-z8vdg" (OuterVolumeSpecName: "kube-api-access-z8vdg") pod "eda33db2-b2c8-4df6-8593-ce2f0ec0168f" (UID: "eda33db2-b2c8-4df6-8593-ce2f0ec0168f"). InnerVolumeSpecName "kube-api-access-z8vdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.609856 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.609910 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vdg\" (UniqueName: \"kubernetes.io/projected/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-kube-api-access-z8vdg\") on node \"crc\" DevicePath \"\"" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.617721 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda33db2-b2c8-4df6-8593-ce2f0ec0168f" (UID: "eda33db2-b2c8-4df6-8593-ce2f0ec0168f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.712298 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda33db2-b2c8-4df6-8593-ce2f0ec0168f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.880193 4992 generic.go:334] "Generic (PLEG): container finished" podID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerID="c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae" exitCode=0 Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.880242 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerDied","Data":"c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae"} Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.880275 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tchwr" event={"ID":"eda33db2-b2c8-4df6-8593-ce2f0ec0168f","Type":"ContainerDied","Data":"5e4b816e87ea2dec1ef7028f53eff8b47f11c8573643c50c1d24820d509742ef"} Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.880294 4992 scope.go:117] "RemoveContainer" containerID="c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.880298 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tchwr" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.904503 4992 scope.go:117] "RemoveContainer" containerID="4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.921746 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tchwr"] Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.940945 4992 scope.go:117] "RemoveContainer" containerID="79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7" Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.943434 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tchwr"] Dec 11 09:07:50 crc kubenswrapper[4992]: I1211 09:07:50.999920 4992 scope.go:117] "RemoveContainer" containerID="c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae" Dec 11 09:07:51 crc kubenswrapper[4992]: E1211 09:07:51.000440 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae\": container with ID starting with c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae not found: ID does not exist" containerID="c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae" Dec 11 09:07:51 crc kubenswrapper[4992]: I1211 09:07:51.000493 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae"} err="failed to get container status \"c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae\": rpc error: code = NotFound desc = could not find container \"c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae\": container with ID starting with c88dd0ee2b32afd11ce95d9613a2ab280bf128333b2c734929f7c86d329af9ae not found: ID does 
not exist" Dec 11 09:07:51 crc kubenswrapper[4992]: I1211 09:07:51.000525 4992 scope.go:117] "RemoveContainer" containerID="4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa" Dec 11 09:07:51 crc kubenswrapper[4992]: E1211 09:07:51.001021 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa\": container with ID starting with 4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa not found: ID does not exist" containerID="4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa" Dec 11 09:07:51 crc kubenswrapper[4992]: I1211 09:07:51.001055 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa"} err="failed to get container status \"4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa\": rpc error: code = NotFound desc = could not find container \"4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa\": container with ID starting with 4c839f69171051009b478d28296fe5a41d4d1647b6ff6bc62865d0fdafb4e7fa not found: ID does not exist" Dec 11 09:07:51 crc kubenswrapper[4992]: I1211 09:07:51.001085 4992 scope.go:117] "RemoveContainer" containerID="79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7" Dec 11 09:07:51 crc kubenswrapper[4992]: E1211 09:07:51.001340 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7\": container with ID starting with 79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7 not found: ID does not exist" containerID="79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7" Dec 11 09:07:51 crc kubenswrapper[4992]: I1211 09:07:51.001368 4992 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7"} err="failed to get container status \"79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7\": rpc error: code = NotFound desc = could not find container \"79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7\": container with ID starting with 79d0ed12b1da8ca72b99f973b40b9a83fe829cec610af8a79bce544619fd9ea7 not found: ID does not exist" Dec 11 09:07:52 crc kubenswrapper[4992]: I1211 09:07:52.113427 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" path="/var/lib/kubelet/pods/eda33db2-b2c8-4df6-8593-ce2f0ec0168f/volumes" Dec 11 09:08:29 crc kubenswrapper[4992]: I1211 09:08:29.232619 4992 generic.go:334] "Generic (PLEG): container finished" podID="b236958b-e08b-46ec-9e79-772bcb3d6d14" containerID="19b190e76b187fb64a588e96d1cccaff835f3d418f107fbc3ac9bdf6d217f0c4" exitCode=0 Dec 11 09:08:29 crc kubenswrapper[4992]: I1211 09:08:29.232709 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" event={"ID":"b236958b-e08b-46ec-9e79-772bcb3d6d14","Type":"ContainerDied","Data":"19b190e76b187fb64a588e96d1cccaff835f3d418f107fbc3ac9bdf6d217f0c4"} Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.674035 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.783738 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-0\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.783853 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-0\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.783897 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-inventory\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.783977 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-ssh-key\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.784052 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-1\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.784199 4992 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-combined-ca-bundle\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.784785 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9w2s\" (UniqueName: \"kubernetes.io/projected/b236958b-e08b-46ec-9e79-772bcb3d6d14-kube-api-access-p9w2s\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.784833 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-1\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.784916 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-extra-config-0\") pod \"b236958b-e08b-46ec-9e79-772bcb3d6d14\" (UID: \"b236958b-e08b-46ec-9e79-772bcb3d6d14\") " Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.790190 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.795463 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b236958b-e08b-46ec-9e79-772bcb3d6d14-kube-api-access-p9w2s" (OuterVolumeSpecName: "kube-api-access-p9w2s") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "kube-api-access-p9w2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.817255 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-inventory" (OuterVolumeSpecName: "inventory") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.820374 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.820591 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.821515 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.824655 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.825901 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.830725 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b236958b-e08b-46ec-9e79-772bcb3d6d14" (UID: "b236958b-e08b-46ec-9e79-772bcb3d6d14"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887319 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887362 4992 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887377 4992 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887389 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9w2s\" (UniqueName: \"kubernetes.io/projected/b236958b-e08b-46ec-9e79-772bcb3d6d14-kube-api-access-p9w2s\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887403 4992 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887416 4992 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887427 4992 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-cell1-compute-config-0\") on node \"crc\" 
DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887438 4992 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:30 crc kubenswrapper[4992]: I1211 09:08:30.887452 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b236958b-e08b-46ec-9e79-772bcb3d6d14-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.252309 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" event={"ID":"b236958b-e08b-46ec-9e79-772bcb3d6d14","Type":"ContainerDied","Data":"01a5c27f72bc36eeae9937f12e4fb3e57a4ffbbebd2e926dbde5a5cdce3b22ca"} Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.252355 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a5c27f72bc36eeae9937f12e4fb3e57a4ffbbebd2e926dbde5a5cdce3b22ca" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.252369 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gnrls" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.369575 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m"] Dec 11 09:08:31 crc kubenswrapper[4992]: E1211 09:08:31.370028 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="extract-content" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.370048 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="extract-content" Dec 11 09:08:31 crc kubenswrapper[4992]: E1211 09:08:31.370069 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b236958b-e08b-46ec-9e79-772bcb3d6d14" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.370076 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="b236958b-e08b-46ec-9e79-772bcb3d6d14" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 09:08:31 crc kubenswrapper[4992]: E1211 09:08:31.370105 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="registry-server" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.370111 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="registry-server" Dec 11 09:08:31 crc kubenswrapper[4992]: E1211 09:08:31.370121 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="extract-utilities" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.370128 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="extract-utilities" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.370356 4992 
memory_manager.go:354] "RemoveStaleState removing state" podUID="eda33db2-b2c8-4df6-8593-ce2f0ec0168f" containerName="registry-server" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.370375 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="b236958b-e08b-46ec-9e79-772bcb3d6d14" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.371166 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.374537 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.376197 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.376534 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jl2" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.376863 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.377280 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.394425 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m"] Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.397859 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.397946 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.398035 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.398104 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.398261 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.398317 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjqwl\" (UniqueName: \"kubernetes.io/projected/35d02426-6179-4c35-8e14-f1e06f6684f6-kube-api-access-gjqwl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.398356 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.499754 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.500750 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjqwl\" (UniqueName: \"kubernetes.io/projected/35d02426-6179-4c35-8e14-f1e06f6684f6-kube-api-access-gjqwl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.500923 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.501161 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.501292 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.501428 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.501545 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.504927 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.506217 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.506689 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.507777 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.507831 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.509251 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.522310 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjqwl\" (UniqueName: \"kubernetes.io/projected/35d02426-6179-4c35-8e14-f1e06f6684f6-kube-api-access-gjqwl\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:31 crc kubenswrapper[4992]: I1211 09:08:31.692138 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:08:32 crc kubenswrapper[4992]: I1211 09:08:32.283622 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m"] Dec 11 09:08:33 crc kubenswrapper[4992]: I1211 09:08:33.273689 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" event={"ID":"35d02426-6179-4c35-8e14-f1e06f6684f6","Type":"ContainerStarted","Data":"4271e82711d36f37aca9db6c1c0685d0b7a9b317d10e766f8c98ecebe5385a33"} Dec 11 09:08:34 crc kubenswrapper[4992]: I1211 09:08:34.283507 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" event={"ID":"35d02426-6179-4c35-8e14-f1e06f6684f6","Type":"ContainerStarted","Data":"93418f3a4e10f1c96eae8f7a8b5d613be0c5b1b77036228c8585a1a1b6160ad6"} Dec 11 09:08:34 crc kubenswrapper[4992]: I1211 09:08:34.304352 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" podStartSLOduration=2.063084418 podStartE2EDuration="3.30432656s" podCreationTimestamp="2025-12-11 09:08:31 +0000 UTC" firstStartedPulling="2025-12-11 09:08:32.290233064 +0000 UTC m=+2736.549706990" lastFinishedPulling="2025-12-11 09:08:33.531475206 +0000 UTC m=+2737.790949132" observedRunningTime="2025-12-11 09:08:34.299695888 +0000 UTC m=+2738.559169824" watchObservedRunningTime="2025-12-11 09:08:34.30432656 +0000 UTC m=+2738.563800506" Dec 11 09:08:35 crc kubenswrapper[4992]: I1211 09:08:35.378281 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:08:35 crc kubenswrapper[4992]: 
I1211 09:08:35.378691 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:09:05 crc kubenswrapper[4992]: I1211 09:09:05.378543 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:09:05 crc kubenswrapper[4992]: I1211 09:09:05.379423 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.378558 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.379152 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.379198 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.379942 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c32013c9ab0dc82dc452f64e312a8e8636dc59a14ffb6b9183a77e995cdb69b"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.380004 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://3c32013c9ab0dc82dc452f64e312a8e8636dc59a14ffb6b9183a77e995cdb69b" gracePeriod=600 Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.849262 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="3c32013c9ab0dc82dc452f64e312a8e8636dc59a14ffb6b9183a77e995cdb69b" exitCode=0 Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.849310 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"3c32013c9ab0dc82dc452f64e312a8e8636dc59a14ffb6b9183a77e995cdb69b"} Dec 11 09:09:35 crc kubenswrapper[4992]: I1211 09:09:35.849343 4992 scope.go:117] "RemoveContainer" containerID="6bcd0331c4b2add22860b31979c4208f3a3b657580771f00130a43f00c2e583f" Dec 11 09:09:36 crc kubenswrapper[4992]: I1211 09:09:36.861536 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724"} Dec 11 09:11:14 crc kubenswrapper[4992]: I1211 09:11:14.703167 4992 generic.go:334] "Generic (PLEG): container finished" podID="35d02426-6179-4c35-8e14-f1e06f6684f6" containerID="93418f3a4e10f1c96eae8f7a8b5d613be0c5b1b77036228c8585a1a1b6160ad6" exitCode=0 Dec 11 09:11:14 crc kubenswrapper[4992]: I1211 09:11:14.703291 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" event={"ID":"35d02426-6179-4c35-8e14-f1e06f6684f6","Type":"ContainerDied","Data":"93418f3a4e10f1c96eae8f7a8b5d613be0c5b1b77036228c8585a1a1b6160ad6"} Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.185447 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.364518 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-1\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.364661 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-telemetry-combined-ca-bundle\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.364698 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-2\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.364746 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-inventory\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.364846 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-0\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.364900 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjqwl\" (UniqueName: \"kubernetes.io/projected/35d02426-6179-4c35-8e14-f1e06f6684f6-kube-api-access-gjqwl\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.365006 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ssh-key\") pod \"35d02426-6179-4c35-8e14-f1e06f6684f6\" (UID: \"35d02426-6179-4c35-8e14-f1e06f6684f6\") " Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.373501 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d02426-6179-4c35-8e14-f1e06f6684f6-kube-api-access-gjqwl" (OuterVolumeSpecName: "kube-api-access-gjqwl") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). 
InnerVolumeSpecName "kube-api-access-gjqwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.373500 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.394517 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.397356 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.397460 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.399366 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.400942 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-inventory" (OuterVolumeSpecName: "inventory") pod "35d02426-6179-4c35-8e14-f1e06f6684f6" (UID: "35d02426-6179-4c35-8e14-f1e06f6684f6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468312 4992 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468338 4992 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468349 4992 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468362 4992 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468372 4992 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468383 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjqwl\" (UniqueName: \"kubernetes.io/projected/35d02426-6179-4c35-8e14-f1e06f6684f6-kube-api-access-gjqwl\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.468392 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35d02426-6179-4c35-8e14-f1e06f6684f6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.725391 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" event={"ID":"35d02426-6179-4c35-8e14-f1e06f6684f6","Type":"ContainerDied","Data":"4271e82711d36f37aca9db6c1c0685d0b7a9b317d10e766f8c98ecebe5385a33"} Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.725664 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4271e82711d36f37aca9db6c1c0685d0b7a9b317d10e766f8c98ecebe5385a33" Dec 11 09:11:16 crc kubenswrapper[4992]: I1211 09:11:16.725561 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m" Dec 11 09:12:05 crc kubenswrapper[4992]: I1211 09:12:05.379089 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:12:05 crc kubenswrapper[4992]: I1211 09:12:05.379617 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.646962 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 09:12:11 crc kubenswrapper[4992]: E1211 09:12:11.648374 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35d02426-6179-4c35-8e14-f1e06f6684f6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.648403 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d02426-6179-4c35-8e14-f1e06f6684f6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.648884 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="35d02426-6179-4c35-8e14-f1e06f6684f6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.650156 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.652494 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.652521 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.652662 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.652856 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-t4bvx" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.657179 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749367 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749431 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749465 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ssh-key\") pod 
\"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749490 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79dtx\" (UniqueName: \"kubernetes.io/projected/79d2a033-d073-439d-8d2c-779b95da30f4-kube-api-access-79dtx\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749564 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-config-data\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749595 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749688 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749716 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.749742 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851559 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851661 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851686 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851714 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-workdir\") pod 
\"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851739 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851769 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851796 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851818 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79dtx\" (UniqueName: \"kubernetes.io/projected/79d2a033-d073-439d-8d2c-779b95da30f4-kube-api-access-79dtx\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.851849 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-config-data\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc 
kubenswrapper[4992]: I1211 09:12:11.852406 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.852532 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.852660 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.853154 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.853757 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-config-data\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.857700 4992 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.858152 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.858417 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.871543 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79dtx\" (UniqueName: \"kubernetes.io/projected/79d2a033-d073-439d-8d2c-779b95da30f4-kube-api-access-79dtx\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.880307 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " pod="openstack/tempest-tests-tempest" Dec 11 09:12:11 crc kubenswrapper[4992]: I1211 09:12:11.981388 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 09:12:12 crc kubenswrapper[4992]: I1211 09:12:12.417716 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 09:12:13 crc kubenswrapper[4992]: I1211 09:12:13.237886 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"79d2a033-d073-439d-8d2c-779b95da30f4","Type":"ContainerStarted","Data":"7eb765eec9e345ab2d697ffa2064f2c729255a737dfde2ab7adde2f4c41ab42c"} Dec 11 09:12:35 crc kubenswrapper[4992]: I1211 09:12:35.378826 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:12:35 crc kubenswrapper[4992]: I1211 09:12:35.379418 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:12:44 crc kubenswrapper[4992]: E1211 09:12:44.029758 4992 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 11 09:12:44 crc kubenswrapper[4992]: E1211 09:12:44.030400 4992 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79dtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(79d2a033-d073-439d-8d2c-779b95da30f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 09:12:44 crc kubenswrapper[4992]: E1211 09:12:44.031518 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="79d2a033-d073-439d-8d2c-779b95da30f4" Dec 11 09:12:44 crc kubenswrapper[4992]: E1211 09:12:44.533429 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="79d2a033-d073-439d-8d2c-779b95da30f4" Dec 11 09:12:58 crc 
kubenswrapper[4992]: I1211 09:12:58.098106 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:12:59 crc kubenswrapper[4992]: I1211 09:12:59.174909 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 11 09:13:00 crc kubenswrapper[4992]: I1211 09:13:00.667239 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"79d2a033-d073-439d-8d2c-779b95da30f4","Type":"ContainerStarted","Data":"e7ee74ec580574d9e66d2ab171ef9f7607c3ea202bfdebaa5c4cf3e4a8b7331c"} Dec 11 09:13:00 crc kubenswrapper[4992]: I1211 09:13:00.692614 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.9404369150000003 podStartE2EDuration="50.692595292s" podCreationTimestamp="2025-12-11 09:12:10 +0000 UTC" firstStartedPulling="2025-12-11 09:12:12.419961379 +0000 UTC m=+2956.679435305" lastFinishedPulling="2025-12-11 09:12:59.172119756 +0000 UTC m=+3003.431593682" observedRunningTime="2025-12-11 09:13:00.681678007 +0000 UTC m=+3004.941151953" watchObservedRunningTime="2025-12-11 09:13:00.692595292 +0000 UTC m=+3004.952069218" Dec 11 09:13:05 crc kubenswrapper[4992]: I1211 09:13:05.378596 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:13:05 crc kubenswrapper[4992]: I1211 09:13:05.379132 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 11 09:13:05 crc kubenswrapper[4992]: I1211 09:13:05.379166 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:13:05 crc kubenswrapper[4992]: I1211 09:13:05.379983 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:13:05 crc kubenswrapper[4992]: I1211 09:13:05.380030 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" gracePeriod=600 Dec 11 09:13:06 crc kubenswrapper[4992]: E1211 09:13:06.070517 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:13:06 crc kubenswrapper[4992]: I1211 09:13:06.720536 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" exitCode=0 Dec 11 09:13:06 crc kubenswrapper[4992]: I1211 09:13:06.720600 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724"} Dec 11 09:13:06 crc kubenswrapper[4992]: I1211 09:13:06.720929 4992 scope.go:117] "RemoveContainer" containerID="3c32013c9ab0dc82dc452f64e312a8e8636dc59a14ffb6b9183a77e995cdb69b" Dec 11 09:13:06 crc kubenswrapper[4992]: I1211 09:13:06.721758 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:13:06 crc kubenswrapper[4992]: E1211 09:13:06.722251 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:13:20 crc kubenswrapper[4992]: I1211 09:13:20.095837 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:13:20 crc kubenswrapper[4992]: E1211 09:13:20.096674 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.786896 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nrnzq"] Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.789359 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.793331 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-catalog-content\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.793401 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-utilities\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.793448 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvk92\" (UniqueName: \"kubernetes.io/projected/0665207d-e3ae-452f-9b49-bbccce7024c9-kube-api-access-zvk92\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.809882 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrnzq"] Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.895772 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-catalog-content\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.895841 4992 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-utilities\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.895876 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvk92\" (UniqueName: \"kubernetes.io/projected/0665207d-e3ae-452f-9b49-bbccce7024c9-kube-api-access-zvk92\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.896276 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-catalog-content\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.896330 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-utilities\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:24 crc kubenswrapper[4992]: I1211 09:13:24.923955 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvk92\" (UniqueName: \"kubernetes.io/projected/0665207d-e3ae-452f-9b49-bbccce7024c9-kube-api-access-zvk92\") pod \"community-operators-nrnzq\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:25 crc kubenswrapper[4992]: I1211 09:13:25.114893 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:25 crc kubenswrapper[4992]: I1211 09:13:25.662094 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nrnzq"] Dec 11 09:13:25 crc kubenswrapper[4992]: I1211 09:13:25.895415 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerStarted","Data":"53ac797a06f813e355baeec9ba9c97ed43dfc35f304b840f2845048043a54747"} Dec 11 09:13:26 crc kubenswrapper[4992]: I1211 09:13:26.909731 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerStarted","Data":"1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671"} Dec 11 09:13:27 crc kubenswrapper[4992]: I1211 09:13:27.919525 4992 generic.go:334] "Generic (PLEG): container finished" podID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerID="1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671" exitCode=0 Dec 11 09:13:27 crc kubenswrapper[4992]: I1211 09:13:27.919676 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerDied","Data":"1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671"} Dec 11 09:13:34 crc kubenswrapper[4992]: I1211 09:13:34.982480 4992 generic.go:334] "Generic (PLEG): container finished" podID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerID="e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe" exitCode=0 Dec 11 09:13:34 crc kubenswrapper[4992]: I1211 09:13:34.983204 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" 
event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerDied","Data":"e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe"} Dec 11 09:13:35 crc kubenswrapper[4992]: I1211 09:13:35.096410 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:13:35 crc kubenswrapper[4992]: E1211 09:13:35.096907 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:13:35 crc kubenswrapper[4992]: I1211 09:13:35.995595 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerStarted","Data":"692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a"} Dec 11 09:13:36 crc kubenswrapper[4992]: I1211 09:13:36.016706 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nrnzq" podStartSLOduration=5.495245479 podStartE2EDuration="12.016682948s" podCreationTimestamp="2025-12-11 09:13:24 +0000 UTC" firstStartedPulling="2025-12-11 09:13:28.930660279 +0000 UTC m=+3033.190134215" lastFinishedPulling="2025-12-11 09:13:35.452097758 +0000 UTC m=+3039.711571684" observedRunningTime="2025-12-11 09:13:36.013281915 +0000 UTC m=+3040.272755861" watchObservedRunningTime="2025-12-11 09:13:36.016682948 +0000 UTC m=+3040.276156884" Dec 11 09:13:45 crc kubenswrapper[4992]: I1211 09:13:45.114992 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:45 crc 
kubenswrapper[4992]: I1211 09:13:45.115564 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:45 crc kubenswrapper[4992]: I1211 09:13:45.160244 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:46 crc kubenswrapper[4992]: I1211 09:13:46.128890 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:46 crc kubenswrapper[4992]: I1211 09:13:46.177625 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrnzq"] Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.094728 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nrnzq" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="registry-server" containerID="cri-o://692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a" gracePeriod=2 Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.565409 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.677464 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvk92\" (UniqueName: \"kubernetes.io/projected/0665207d-e3ae-452f-9b49-bbccce7024c9-kube-api-access-zvk92\") pod \"0665207d-e3ae-452f-9b49-bbccce7024c9\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.677813 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-catalog-content\") pod \"0665207d-e3ae-452f-9b49-bbccce7024c9\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.677963 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-utilities\") pod \"0665207d-e3ae-452f-9b49-bbccce7024c9\" (UID: \"0665207d-e3ae-452f-9b49-bbccce7024c9\") " Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.678822 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-utilities" (OuterVolumeSpecName: "utilities") pod "0665207d-e3ae-452f-9b49-bbccce7024c9" (UID: "0665207d-e3ae-452f-9b49-bbccce7024c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.685957 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0665207d-e3ae-452f-9b49-bbccce7024c9-kube-api-access-zvk92" (OuterVolumeSpecName: "kube-api-access-zvk92") pod "0665207d-e3ae-452f-9b49-bbccce7024c9" (UID: "0665207d-e3ae-452f-9b49-bbccce7024c9"). InnerVolumeSpecName "kube-api-access-zvk92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.721037 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0665207d-e3ae-452f-9b49-bbccce7024c9" (UID: "0665207d-e3ae-452f-9b49-bbccce7024c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.779699 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvk92\" (UniqueName: \"kubernetes.io/projected/0665207d-e3ae-452f-9b49-bbccce7024c9-kube-api-access-zvk92\") on node \"crc\" DevicePath \"\"" Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.779729 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:13:48 crc kubenswrapper[4992]: I1211 09:13:48.779738 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0665207d-e3ae-452f-9b49-bbccce7024c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.094743 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:13:49 crc kubenswrapper[4992]: E1211 09:13:49.095205 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:13:49 
crc kubenswrapper[4992]: I1211 09:13:49.108304 4992 generic.go:334] "Generic (PLEG): container finished" podID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerID="692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a" exitCode=0 Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.108384 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerDied","Data":"692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a"} Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.108423 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nrnzq" event={"ID":"0665207d-e3ae-452f-9b49-bbccce7024c9","Type":"ContainerDied","Data":"53ac797a06f813e355baeec9ba9c97ed43dfc35f304b840f2845048043a54747"} Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.108449 4992 scope.go:117] "RemoveContainer" containerID="692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.108464 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nrnzq" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.131203 4992 scope.go:117] "RemoveContainer" containerID="e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.146975 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nrnzq"] Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.157012 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nrnzq"] Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.168930 4992 scope.go:117] "RemoveContainer" containerID="1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.198532 4992 scope.go:117] "RemoveContainer" containerID="692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a" Dec 11 09:13:49 crc kubenswrapper[4992]: E1211 09:13:49.198946 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a\": container with ID starting with 692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a not found: ID does not exist" containerID="692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.198984 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a"} err="failed to get container status \"692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a\": rpc error: code = NotFound desc = could not find container \"692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a\": container with ID starting with 692dab767dfaa927a4576c479ff1da8ebba35b9d9ddf9734c78ae677c922a80a not 
found: ID does not exist" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.199015 4992 scope.go:117] "RemoveContainer" containerID="e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe" Dec 11 09:13:49 crc kubenswrapper[4992]: E1211 09:13:49.199369 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe\": container with ID starting with e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe not found: ID does not exist" containerID="e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.199398 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe"} err="failed to get container status \"e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe\": rpc error: code = NotFound desc = could not find container \"e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe\": container with ID starting with e002a747a2ec562900e27a003ebeea641c81af5cf59df20bf00145a1b5e5c5fe not found: ID does not exist" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.199415 4992 scope.go:117] "RemoveContainer" containerID="1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671" Dec 11 09:13:49 crc kubenswrapper[4992]: E1211 09:13:49.199648 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671\": container with ID starting with 1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671 not found: ID does not exist" containerID="1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671" Dec 11 09:13:49 crc kubenswrapper[4992]: I1211 09:13:49.199676 4992 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671"} err="failed to get container status \"1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671\": rpc error: code = NotFound desc = could not find container \"1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671\": container with ID starting with 1470dd032cb8f77c49d2dd2fe37243c6e20157adecc241b387c87334b847b671 not found: ID does not exist" Dec 11 09:13:50 crc kubenswrapper[4992]: I1211 09:13:50.104966 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" path="/var/lib/kubelet/pods/0665207d-e3ae-452f-9b49-bbccce7024c9/volumes" Dec 11 09:14:02 crc kubenswrapper[4992]: I1211 09:14:02.095022 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:14:02 crc kubenswrapper[4992]: E1211 09:14:02.095844 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:14:15 crc kubenswrapper[4992]: I1211 09:14:15.095446 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:14:15 crc kubenswrapper[4992]: E1211 09:14:15.096344 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:14:27 crc kubenswrapper[4992]: I1211 09:14:27.095243 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:14:27 crc kubenswrapper[4992]: E1211 09:14:27.095943 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:14:38 crc kubenswrapper[4992]: I1211 09:14:38.095683 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:14:38 crc kubenswrapper[4992]: E1211 09:14:38.096508 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:14:49 crc kubenswrapper[4992]: I1211 09:14:49.095586 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:14:49 crc kubenswrapper[4992]: E1211 09:14:49.096473 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.151117 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x"] Dec 11 09:15:00 crc kubenswrapper[4992]: E1211 09:15:00.152320 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="extract-content" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.152345 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="extract-content" Dec 11 09:15:00 crc kubenswrapper[4992]: E1211 09:15:00.152387 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="extract-utilities" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.152398 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="extract-utilities" Dec 11 09:15:00 crc kubenswrapper[4992]: E1211 09:15:00.152427 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="registry-server" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.152441 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="registry-server" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.152714 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="0665207d-e3ae-452f-9b49-bbccce7024c9" containerName="registry-server" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.153555 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.157832 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.162359 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x"] Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.166003 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.274813 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwfm\" (UniqueName: \"kubernetes.io/projected/dabf0a12-b12b-478a-883d-b72c49d7f55b-kube-api-access-sgwfm\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.275026 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dabf0a12-b12b-478a-883d-b72c49d7f55b-config-volume\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.275080 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dabf0a12-b12b-478a-883d-b72c49d7f55b-secret-volume\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.377578 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwfm\" (UniqueName: \"kubernetes.io/projected/dabf0a12-b12b-478a-883d-b72c49d7f55b-kube-api-access-sgwfm\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.377741 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dabf0a12-b12b-478a-883d-b72c49d7f55b-config-volume\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.377785 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dabf0a12-b12b-478a-883d-b72c49d7f55b-secret-volume\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.379486 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dabf0a12-b12b-478a-883d-b72c49d7f55b-config-volume\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.390846 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/dabf0a12-b12b-478a-883d-b72c49d7f55b-secret-volume\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.397109 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwfm\" (UniqueName: \"kubernetes.io/projected/dabf0a12-b12b-478a-883d-b72c49d7f55b-kube-api-access-sgwfm\") pod \"collect-profiles-29424075-6jt8x\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.488497 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:00 crc kubenswrapper[4992]: I1211 09:15:00.950267 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x"] Dec 11 09:15:01 crc kubenswrapper[4992]: I1211 09:15:01.723232 4992 generic.go:334] "Generic (PLEG): container finished" podID="dabf0a12-b12b-478a-883d-b72c49d7f55b" containerID="a411072cded6d611b8bd03fc120b3070e9825bfedccf5dac1f22c38942a5066c" exitCode=0 Dec 11 09:15:01 crc kubenswrapper[4992]: I1211 09:15:01.723283 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" event={"ID":"dabf0a12-b12b-478a-883d-b72c49d7f55b","Type":"ContainerDied","Data":"a411072cded6d611b8bd03fc120b3070e9825bfedccf5dac1f22c38942a5066c"} Dec 11 09:15:01 crc kubenswrapper[4992]: I1211 09:15:01.723550 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" 
event={"ID":"dabf0a12-b12b-478a-883d-b72c49d7f55b","Type":"ContainerStarted","Data":"6b95c56cdd4944d6f1b5ab244c4647378bbbbe9b7e18ea09245ba8074abda389"} Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.095161 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.095251 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:03 crc kubenswrapper[4992]: E1211 09:15:03.095802 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.255387 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgwfm\" (UniqueName: \"kubernetes.io/projected/dabf0a12-b12b-478a-883d-b72c49d7f55b-kube-api-access-sgwfm\") pod \"dabf0a12-b12b-478a-883d-b72c49d7f55b\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.255539 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dabf0a12-b12b-478a-883d-b72c49d7f55b-secret-volume\") pod \"dabf0a12-b12b-478a-883d-b72c49d7f55b\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.255691 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/dabf0a12-b12b-478a-883d-b72c49d7f55b-config-volume\") pod \"dabf0a12-b12b-478a-883d-b72c49d7f55b\" (UID: \"dabf0a12-b12b-478a-883d-b72c49d7f55b\") " Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.262454 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabf0a12-b12b-478a-883d-b72c49d7f55b-config-volume" (OuterVolumeSpecName: "config-volume") pod "dabf0a12-b12b-478a-883d-b72c49d7f55b" (UID: "dabf0a12-b12b-478a-883d-b72c49d7f55b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.263865 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabf0a12-b12b-478a-883d-b72c49d7f55b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dabf0a12-b12b-478a-883d-b72c49d7f55b" (UID: "dabf0a12-b12b-478a-883d-b72c49d7f55b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.270301 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabf0a12-b12b-478a-883d-b72c49d7f55b-kube-api-access-sgwfm" (OuterVolumeSpecName: "kube-api-access-sgwfm") pod "dabf0a12-b12b-478a-883d-b72c49d7f55b" (UID: "dabf0a12-b12b-478a-883d-b72c49d7f55b"). InnerVolumeSpecName "kube-api-access-sgwfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.358418 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dabf0a12-b12b-478a-883d-b72c49d7f55b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.358453 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgwfm\" (UniqueName: \"kubernetes.io/projected/dabf0a12-b12b-478a-883d-b72c49d7f55b-kube-api-access-sgwfm\") on node \"crc\" DevicePath \"\"" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.358469 4992 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dabf0a12-b12b-478a-883d-b72c49d7f55b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.742864 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" event={"ID":"dabf0a12-b12b-478a-883d-b72c49d7f55b","Type":"ContainerDied","Data":"6b95c56cdd4944d6f1b5ab244c4647378bbbbe9b7e18ea09245ba8074abda389"} Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.743247 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b95c56cdd4944d6f1b5ab244c4647378bbbbe9b7e18ea09245ba8074abda389" Dec 11 09:15:03 crc kubenswrapper[4992]: I1211 09:15:03.742947 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424075-6jt8x" Dec 11 09:15:04 crc kubenswrapper[4992]: I1211 09:15:04.169733 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"] Dec 11 09:15:04 crc kubenswrapper[4992]: I1211 09:15:04.178363 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424030-qtgvg"] Dec 11 09:15:06 crc kubenswrapper[4992]: I1211 09:15:06.108554 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b54479f-8dc7-42ad-b2c5-993f72a43852" path="/var/lib/kubelet/pods/7b54479f-8dc7-42ad-b2c5-993f72a43852/volumes" Dec 11 09:15:13 crc kubenswrapper[4992]: I1211 09:15:13.882875 4992 scope.go:117] "RemoveContainer" containerID="538dfed5e4b65afad2c5f8340b03bcc04531c22ada6b28162392e52f162534b7" Dec 11 09:15:18 crc kubenswrapper[4992]: I1211 09:15:18.094936 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:15:18 crc kubenswrapper[4992]: E1211 09:15:18.096588 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:15:32 crc kubenswrapper[4992]: I1211 09:15:32.095686 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:15:32 crc kubenswrapper[4992]: E1211 09:15:32.096592 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:15:45 crc kubenswrapper[4992]: I1211 09:15:45.094921 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:15:45 crc kubenswrapper[4992]: E1211 09:15:45.095791 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:15:59 crc kubenswrapper[4992]: I1211 09:15:59.094960 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:15:59 crc kubenswrapper[4992]: E1211 09:15:59.095850 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.887340 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7frfr"] Dec 11 09:16:08 crc kubenswrapper[4992]: E1211 09:16:08.888332 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabf0a12-b12b-478a-883d-b72c49d7f55b" containerName="collect-profiles" Dec 11 
09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.888345 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabf0a12-b12b-478a-883d-b72c49d7f55b" containerName="collect-profiles" Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.888519 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabf0a12-b12b-478a-883d-b72c49d7f55b" containerName="collect-profiles" Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.889956 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.898907 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7frfr"] Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.970779 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-utilities\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.970885 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpw9z\" (UniqueName: \"kubernetes.io/projected/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-kube-api-access-xpw9z\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:08 crc kubenswrapper[4992]: I1211 09:16:08.971168 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-catalog-content\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " 
pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.072642 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpw9z\" (UniqueName: \"kubernetes.io/projected/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-kube-api-access-xpw9z\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.073095 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-catalog-content\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.073127 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-utilities\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.073586 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-catalog-content\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.073656 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-utilities\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " 
pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.085209 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jrrk4"] Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.087107 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.102475 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrrk4"] Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.108957 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpw9z\" (UniqueName: \"kubernetes.io/projected/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-kube-api-access-xpw9z\") pod \"certified-operators-7frfr\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.175420 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-utilities\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.175802 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6c54\" (UniqueName: \"kubernetes.io/projected/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-kube-api-access-d6c54\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.175908 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-catalog-content\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.209679 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.277137 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-catalog-content\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.277190 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-utilities\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.277348 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6c54\" (UniqueName: \"kubernetes.io/projected/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-kube-api-access-d6c54\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.278516 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-catalog-content\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 
11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.278806 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-utilities\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.298446 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6c54\" (UniqueName: \"kubernetes.io/projected/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-kube-api-access-d6c54\") pod \"redhat-marketplace-jrrk4\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.405248 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:09 crc kubenswrapper[4992]: W1211 09:16:09.772764 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1bfdb7e_cc82_493f_9c90_0349e2f9c0e0.slice/crio-557a132733ab41066b537fbff34030ce8cd7fdbf5c71268d7c0ced395d0779d8 WatchSource:0}: Error finding container 557a132733ab41066b537fbff34030ce8cd7fdbf5c71268d7c0ced395d0779d8: Status 404 returned error can't find the container with id 557a132733ab41066b537fbff34030ce8cd7fdbf5c71268d7c0ced395d0779d8 Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.781024 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7frfr"] Dec 11 09:16:09 crc kubenswrapper[4992]: I1211 09:16:09.982020 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrrk4"] Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.095996 4992 scope.go:117] "RemoveContainer" 
containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:16:10 crc kubenswrapper[4992]: E1211 09:16:10.096242 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.321046 4992 generic.go:334] "Generic (PLEG): container finished" podID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerID="16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3" exitCode=0 Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.321098 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerDied","Data":"16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3"} Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.321152 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerStarted","Data":"6c902616c3e4d5793075e3be2f55c70133aac8f086711743b7404102a5f5f6ea"} Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.323395 4992 generic.go:334] "Generic (PLEG): container finished" podID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerID="d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a" exitCode=0 Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.323437 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" 
event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerDied","Data":"d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a"} Dec 11 09:16:10 crc kubenswrapper[4992]: I1211 09:16:10.323462 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerStarted","Data":"557a132733ab41066b537fbff34030ce8cd7fdbf5c71268d7c0ced395d0779d8"} Dec 11 09:16:11 crc kubenswrapper[4992]: I1211 09:16:11.336342 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerStarted","Data":"7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34"} Dec 11 09:16:11 crc kubenswrapper[4992]: I1211 09:16:11.337956 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerStarted","Data":"e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2"} Dec 11 09:16:13 crc kubenswrapper[4992]: I1211 09:16:13.361954 4992 generic.go:334] "Generic (PLEG): container finished" podID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerID="e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2" exitCode=0 Dec 11 09:16:13 crc kubenswrapper[4992]: I1211 09:16:13.362030 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerDied","Data":"e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2"} Dec 11 09:16:15 crc kubenswrapper[4992]: I1211 09:16:15.389084 4992 generic.go:334] "Generic (PLEG): container finished" podID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerID="7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34" exitCode=0 Dec 11 09:16:15 crc kubenswrapper[4992]: I1211 
09:16:15.389156 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerDied","Data":"7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34"} Dec 11 09:16:16 crc kubenswrapper[4992]: I1211 09:16:16.401340 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerStarted","Data":"a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d"} Dec 11 09:16:16 crc kubenswrapper[4992]: I1211 09:16:16.421153 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jrrk4" podStartSLOduration=2.500973036 podStartE2EDuration="7.421128559s" podCreationTimestamp="2025-12-11 09:16:09 +0000 UTC" firstStartedPulling="2025-12-11 09:16:10.322966855 +0000 UTC m=+3194.582440781" lastFinishedPulling="2025-12-11 09:16:15.243122378 +0000 UTC m=+3199.502596304" observedRunningTime="2025-12-11 09:16:16.418251998 +0000 UTC m=+3200.677725934" watchObservedRunningTime="2025-12-11 09:16:16.421128559 +0000 UTC m=+3200.680602485" Dec 11 09:16:17 crc kubenswrapper[4992]: I1211 09:16:17.413949 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerStarted","Data":"f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202"} Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.210073 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.210365 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.253747 
4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.273425 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7frfr" podStartSLOduration=5.481220788 podStartE2EDuration="11.273406684s" podCreationTimestamp="2025-12-11 09:16:08 +0000 UTC" firstStartedPulling="2025-12-11 09:16:10.325535468 +0000 UTC m=+3194.585009414" lastFinishedPulling="2025-12-11 09:16:16.117721384 +0000 UTC m=+3200.377195310" observedRunningTime="2025-12-11 09:16:17.441399794 +0000 UTC m=+3201.700873740" watchObservedRunningTime="2025-12-11 09:16:19.273406684 +0000 UTC m=+3203.532880610" Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.407958 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.408035 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:19 crc kubenswrapper[4992]: I1211 09:16:19.457139 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:23 crc kubenswrapper[4992]: I1211 09:16:23.095683 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:16:23 crc kubenswrapper[4992]: E1211 09:16:23.096263 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:16:29 crc kubenswrapper[4992]: I1211 09:16:29.256868 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:29 crc kubenswrapper[4992]: I1211 09:16:29.313412 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7frfr"] Dec 11 09:16:29 crc kubenswrapper[4992]: I1211 09:16:29.461485 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:29 crc kubenswrapper[4992]: I1211 09:16:29.521927 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7frfr" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="registry-server" containerID="cri-o://f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202" gracePeriod=2 Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.007087 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.106801 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-catalog-content\") pod \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.107056 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-utilities\") pod \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.107094 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpw9z\" (UniqueName: \"kubernetes.io/projected/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-kube-api-access-xpw9z\") pod \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\" (UID: \"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0\") " Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.108041 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-utilities" (OuterVolumeSpecName: "utilities") pod "f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" (UID: "f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.113906 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-kube-api-access-xpw9z" (OuterVolumeSpecName: "kube-api-access-xpw9z") pod "f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" (UID: "f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0"). InnerVolumeSpecName "kube-api-access-xpw9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.158518 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" (UID: "f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.209165 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.209194 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpw9z\" (UniqueName: \"kubernetes.io/projected/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-kube-api-access-xpw9z\") on node \"crc\" DevicePath \"\"" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.209204 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.533156 4992 generic.go:334] "Generic (PLEG): container finished" podID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerID="f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202" exitCode=0 Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.533223 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7frfr" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.533261 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerDied","Data":"f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202"} Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.533620 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7frfr" event={"ID":"f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0","Type":"ContainerDied","Data":"557a132733ab41066b537fbff34030ce8cd7fdbf5c71268d7c0ced395d0779d8"} Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.533670 4992 scope.go:117] "RemoveContainer" containerID="f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.558910 4992 scope.go:117] "RemoveContainer" containerID="7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.571802 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7frfr"] Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.582675 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7frfr"] Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.595672 4992 scope.go:117] "RemoveContainer" containerID="d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.623608 4992 scope.go:117] "RemoveContainer" containerID="f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202" Dec 11 09:16:30 crc kubenswrapper[4992]: E1211 09:16:30.625107 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202\": container with ID starting with f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202 not found: ID does not exist" containerID="f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.625153 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202"} err="failed to get container status \"f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202\": rpc error: code = NotFound desc = could not find container \"f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202\": container with ID starting with f47a84e384b462da90e9cad57bd4b667789bd97438549e8f4751882b2472f202 not found: ID does not exist" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.625181 4992 scope.go:117] "RemoveContainer" containerID="7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34" Dec 11 09:16:30 crc kubenswrapper[4992]: E1211 09:16:30.625553 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34\": container with ID starting with 7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34 not found: ID does not exist" containerID="7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.625586 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34"} err="failed to get container status \"7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34\": rpc error: code = NotFound desc = could not find container \"7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34\": container with ID 
starting with 7e57822bb0c4c477e85906031796312d02830939526a46b9a7043897f4cadf34 not found: ID does not exist" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.625601 4992 scope.go:117] "RemoveContainer" containerID="d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a" Dec 11 09:16:30 crc kubenswrapper[4992]: E1211 09:16:30.625912 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a\": container with ID starting with d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a not found: ID does not exist" containerID="d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a" Dec 11 09:16:30 crc kubenswrapper[4992]: I1211 09:16:30.625938 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a"} err="failed to get container status \"d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a\": rpc error: code = NotFound desc = could not find container \"d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a\": container with ID starting with d242c4c0ce731c23e992b8fbcf7d0eea9084e9a5327c057b382f5178928b3f3a not found: ID does not exist" Dec 11 09:16:31 crc kubenswrapper[4992]: I1211 09:16:31.694256 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrrk4"] Dec 11 09:16:31 crc kubenswrapper[4992]: I1211 09:16:31.695066 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jrrk4" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="registry-server" containerID="cri-o://a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d" gracePeriod=2 Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.108120 4992 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" path="/var/lib/kubelet/pods/f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0/volumes" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.289088 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.364517 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-catalog-content\") pod \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.364580 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6c54\" (UniqueName: \"kubernetes.io/projected/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-kube-api-access-d6c54\") pod \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.364616 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-utilities\") pod \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\" (UID: \"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456\") " Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.365670 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-utilities" (OuterVolumeSpecName: "utilities") pod "4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" (UID: "4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.372304 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-kube-api-access-d6c54" (OuterVolumeSpecName: "kube-api-access-d6c54") pod "4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" (UID: "4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456"). InnerVolumeSpecName "kube-api-access-d6c54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.386500 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" (UID: "4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.466993 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.467030 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6c54\" (UniqueName: \"kubernetes.io/projected/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-kube-api-access-d6c54\") on node \"crc\" DevicePath \"\"" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.467039 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.559603 4992 generic.go:334] "Generic (PLEG): container finished" podID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" 
containerID="a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d" exitCode=0 Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.559695 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jrrk4" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.559720 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerDied","Data":"a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d"} Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.559783 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jrrk4" event={"ID":"4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456","Type":"ContainerDied","Data":"6c902616c3e4d5793075e3be2f55c70133aac8f086711743b7404102a5f5f6ea"} Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.559807 4992 scope.go:117] "RemoveContainer" containerID="a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.595341 4992 scope.go:117] "RemoveContainer" containerID="e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.607724 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrrk4"] Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.618541 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jrrk4"] Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.625936 4992 scope.go:117] "RemoveContainer" containerID="16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.675810 4992 scope.go:117] "RemoveContainer" containerID="a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d" Dec 11 
09:16:32 crc kubenswrapper[4992]: E1211 09:16:32.676299 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d\": container with ID starting with a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d not found: ID does not exist" containerID="a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.676332 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d"} err="failed to get container status \"a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d\": rpc error: code = NotFound desc = could not find container \"a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d\": container with ID starting with a7fece970c89d75838cae69c461da4de456638ecb2594d6ae9986528c630cd9d not found: ID does not exist" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.676355 4992 scope.go:117] "RemoveContainer" containerID="e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2" Dec 11 09:16:32 crc kubenswrapper[4992]: E1211 09:16:32.676679 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2\": container with ID starting with e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2 not found: ID does not exist" containerID="e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.676713 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2"} err="failed to get container status 
\"e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2\": rpc error: code = NotFound desc = could not find container \"e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2\": container with ID starting with e803fabb7fcb3277741ce41d187146a1c6f4aa167b7fa6b111bbe0a2dd465cf2 not found: ID does not exist" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.676738 4992 scope.go:117] "RemoveContainer" containerID="16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3" Dec 11 09:16:32 crc kubenswrapper[4992]: E1211 09:16:32.677022 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3\": container with ID starting with 16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3 not found: ID does not exist" containerID="16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3" Dec 11 09:16:32 crc kubenswrapper[4992]: I1211 09:16:32.677042 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3"} err="failed to get container status \"16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3\": rpc error: code = NotFound desc = could not find container \"16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3\": container with ID starting with 16de2c9378bf32e6d6d81558481b0f4acd372f6cc1a149cbe19ba84b5c85a0d3 not found: ID does not exist" Dec 11 09:16:34 crc kubenswrapper[4992]: I1211 09:16:34.111762 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" path="/var/lib/kubelet/pods/4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456/volumes" Dec 11 09:16:35 crc kubenswrapper[4992]: I1211 09:16:35.095299 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 
09:16:35 crc kubenswrapper[4992]: E1211 09:16:35.095934 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:16:50 crc kubenswrapper[4992]: I1211 09:16:50.095042 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:16:50 crc kubenswrapper[4992]: E1211 09:16:50.096100 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:17:05 crc kubenswrapper[4992]: I1211 09:17:05.095618 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:17:05 crc kubenswrapper[4992]: E1211 09:17:05.096625 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:17:16 crc kubenswrapper[4992]: I1211 09:17:16.102744 4992 scope.go:117] "RemoveContainer" 
containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:17:16 crc kubenswrapper[4992]: E1211 09:17:16.103563 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:17:30 crc kubenswrapper[4992]: I1211 09:17:30.095034 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:17:30 crc kubenswrapper[4992]: E1211 09:17:30.095902 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:17:43 crc kubenswrapper[4992]: I1211 09:17:43.096015 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:17:43 crc kubenswrapper[4992]: E1211 09:17:43.097020 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:17:57 crc kubenswrapper[4992]: I1211 09:17:57.095760 4992 scope.go:117] 
"RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:17:57 crc kubenswrapper[4992]: E1211 09:17:57.098446 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:18:09 crc kubenswrapper[4992]: I1211 09:18:09.094958 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:18:09 crc kubenswrapper[4992]: I1211 09:18:09.473587 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"40458e4ef0a84225e9d2bce5983bf5853db92fe0d6f20c0aa5b66108f1866c20"} Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.710525 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zvbp"] Dec 11 09:18:13 crc kubenswrapper[4992]: E1211 09:18:13.711461 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="extract-content" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711475 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="extract-content" Dec 11 09:18:13 crc kubenswrapper[4992]: E1211 09:18:13.711492 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="extract-utilities" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711498 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="extract-utilities" Dec 11 09:18:13 crc kubenswrapper[4992]: E1211 09:18:13.711517 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="extract-content" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711524 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="extract-content" Dec 11 09:18:13 crc kubenswrapper[4992]: E1211 09:18:13.711539 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="registry-server" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711545 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="registry-server" Dec 11 09:18:13 crc kubenswrapper[4992]: E1211 09:18:13.711560 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="extract-utilities" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711566 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="extract-utilities" Dec 11 09:18:13 crc kubenswrapper[4992]: E1211 09:18:13.711589 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="registry-server" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711595 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="registry-server" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711797 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d8ceae4-b8e9-4d1e-9886-eb0d21c3e456" containerName="registry-server" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.711814 4992 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f1bfdb7e-cc82-493f-9c90-0349e2f9c0e0" containerName="registry-server" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.713151 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.725297 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zvbp"] Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.852295 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lfxq\" (UniqueName: \"kubernetes.io/projected/444c6cb3-2e67-4abf-8701-7f7e48937898-kube-api-access-2lfxq\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.852517 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-utilities\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.852762 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-catalog-content\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.954210 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lfxq\" (UniqueName: \"kubernetes.io/projected/444c6cb3-2e67-4abf-8701-7f7e48937898-kube-api-access-2lfxq\") pod \"redhat-operators-5zvbp\" (UID: 
\"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.954310 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-utilities\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.954376 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-catalog-content\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.954923 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-catalog-content\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.955142 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-utilities\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:13 crc kubenswrapper[4992]: I1211 09:18:13.989895 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lfxq\" (UniqueName: \"kubernetes.io/projected/444c6cb3-2e67-4abf-8701-7f7e48937898-kube-api-access-2lfxq\") pod \"redhat-operators-5zvbp\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " 
pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:14 crc kubenswrapper[4992]: I1211 09:18:14.033288 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:14 crc kubenswrapper[4992]: I1211 09:18:14.495446 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zvbp"] Dec 11 09:18:14 crc kubenswrapper[4992]: W1211 09:18:14.518862 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444c6cb3_2e67_4abf_8701_7f7e48937898.slice/crio-a3c6cfe61d28c26191524edfa4124fc56d145771a0b74905fd69ad066b9e32eb WatchSource:0}: Error finding container a3c6cfe61d28c26191524edfa4124fc56d145771a0b74905fd69ad066b9e32eb: Status 404 returned error can't find the container with id a3c6cfe61d28c26191524edfa4124fc56d145771a0b74905fd69ad066b9e32eb Dec 11 09:18:15 crc kubenswrapper[4992]: I1211 09:18:15.530834 4992 generic.go:334] "Generic (PLEG): container finished" podID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerID="d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db" exitCode=0 Dec 11 09:18:15 crc kubenswrapper[4992]: I1211 09:18:15.530946 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerDied","Data":"d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db"} Dec 11 09:18:15 crc kubenswrapper[4992]: I1211 09:18:15.531209 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerStarted","Data":"a3c6cfe61d28c26191524edfa4124fc56d145771a0b74905fd69ad066b9e32eb"} Dec 11 09:18:15 crc kubenswrapper[4992]: I1211 09:18:15.532967 4992 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Dec 11 09:18:16 crc kubenswrapper[4992]: I1211 09:18:16.541816 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerStarted","Data":"9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a"} Dec 11 09:18:19 crc kubenswrapper[4992]: I1211 09:18:19.568054 4992 generic.go:334] "Generic (PLEG): container finished" podID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerID="9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a" exitCode=0 Dec 11 09:18:19 crc kubenswrapper[4992]: I1211 09:18:19.568156 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerDied","Data":"9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a"} Dec 11 09:18:21 crc kubenswrapper[4992]: I1211 09:18:21.595795 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerStarted","Data":"7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5"} Dec 11 09:18:21 crc kubenswrapper[4992]: I1211 09:18:21.616135 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zvbp" podStartSLOduration=3.628277534 podStartE2EDuration="8.616114487s" podCreationTimestamp="2025-12-11 09:18:13 +0000 UTC" firstStartedPulling="2025-12-11 09:18:15.532688452 +0000 UTC m=+3319.792162378" lastFinishedPulling="2025-12-11 09:18:20.520525395 +0000 UTC m=+3324.779999331" observedRunningTime="2025-12-11 09:18:21.614356453 +0000 UTC m=+3325.873830379" watchObservedRunningTime="2025-12-11 09:18:21.616114487 +0000 UTC m=+3325.875588423" Dec 11 09:18:24 crc kubenswrapper[4992]: I1211 09:18:24.035200 4992 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:24 crc kubenswrapper[4992]: I1211 09:18:24.035599 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:25 crc kubenswrapper[4992]: I1211 09:18:25.086472 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zvbp" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="registry-server" probeResult="failure" output=< Dec 11 09:18:25 crc kubenswrapper[4992]: timeout: failed to connect service ":50051" within 1s Dec 11 09:18:25 crc kubenswrapper[4992]: > Dec 11 09:18:34 crc kubenswrapper[4992]: I1211 09:18:34.083448 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:34 crc kubenswrapper[4992]: I1211 09:18:34.141469 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:34 crc kubenswrapper[4992]: I1211 09:18:34.324489 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zvbp"] Dec 11 09:18:35 crc kubenswrapper[4992]: I1211 09:18:35.708617 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zvbp" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="registry-server" containerID="cri-o://7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5" gracePeriod=2 Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.190893 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.282513 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-catalog-content\") pod \"444c6cb3-2e67-4abf-8701-7f7e48937898\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.282742 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lfxq\" (UniqueName: \"kubernetes.io/projected/444c6cb3-2e67-4abf-8701-7f7e48937898-kube-api-access-2lfxq\") pod \"444c6cb3-2e67-4abf-8701-7f7e48937898\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.282764 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-utilities\") pod \"444c6cb3-2e67-4abf-8701-7f7e48937898\" (UID: \"444c6cb3-2e67-4abf-8701-7f7e48937898\") " Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.284077 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-utilities" (OuterVolumeSpecName: "utilities") pod "444c6cb3-2e67-4abf-8701-7f7e48937898" (UID: "444c6cb3-2e67-4abf-8701-7f7e48937898"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.292148 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444c6cb3-2e67-4abf-8701-7f7e48937898-kube-api-access-2lfxq" (OuterVolumeSpecName: "kube-api-access-2lfxq") pod "444c6cb3-2e67-4abf-8701-7f7e48937898" (UID: "444c6cb3-2e67-4abf-8701-7f7e48937898"). InnerVolumeSpecName "kube-api-access-2lfxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.385089 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lfxq\" (UniqueName: \"kubernetes.io/projected/444c6cb3-2e67-4abf-8701-7f7e48937898-kube-api-access-2lfxq\") on node \"crc\" DevicePath \"\"" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.385126 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.398268 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444c6cb3-2e67-4abf-8701-7f7e48937898" (UID: "444c6cb3-2e67-4abf-8701-7f7e48937898"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.487198 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444c6cb3-2e67-4abf-8701-7f7e48937898-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.720962 4992 generic.go:334] "Generic (PLEG): container finished" podID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerID="7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5" exitCode=0 Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.721012 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerDied","Data":"7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5"} Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.721025 4992 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zvbp" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.721041 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zvbp" event={"ID":"444c6cb3-2e67-4abf-8701-7f7e48937898","Type":"ContainerDied","Data":"a3c6cfe61d28c26191524edfa4124fc56d145771a0b74905fd69ad066b9e32eb"} Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.721063 4992 scope.go:117] "RemoveContainer" containerID="7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.751175 4992 scope.go:117] "RemoveContainer" containerID="9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.757853 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zvbp"] Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.770432 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zvbp"] Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.781723 4992 scope.go:117] "RemoveContainer" containerID="d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.837969 4992 scope.go:117] "RemoveContainer" containerID="7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5" Dec 11 09:18:36 crc kubenswrapper[4992]: E1211 09:18:36.838525 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5\": container with ID starting with 7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5 not found: ID does not exist" containerID="7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.838580 4992 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5"} err="failed to get container status \"7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5\": rpc error: code = NotFound desc = could not find container \"7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5\": container with ID starting with 7a1a95f08279d872bef3423dee09a6c07d491396164021c72bbfc94029c1dda5 not found: ID does not exist" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.838618 4992 scope.go:117] "RemoveContainer" containerID="9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a" Dec 11 09:18:36 crc kubenswrapper[4992]: E1211 09:18:36.839012 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a\": container with ID starting with 9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a not found: ID does not exist" containerID="9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.839061 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a"} err="failed to get container status \"9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a\": rpc error: code = NotFound desc = could not find container \"9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a\": container with ID starting with 9ba4be5b7e3acbead6d354e6256b872dc075271b6f74e346d5ae2b70e578ba9a not found: ID does not exist" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.839097 4992 scope.go:117] "RemoveContainer" containerID="d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db" Dec 11 09:18:36 crc kubenswrapper[4992]: E1211 
09:18:36.839356 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db\": container with ID starting with d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db not found: ID does not exist" containerID="d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db" Dec 11 09:18:36 crc kubenswrapper[4992]: I1211 09:18:36.839376 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db"} err="failed to get container status \"d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db\": rpc error: code = NotFound desc = could not find container \"d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db\": container with ID starting with d7a07338c957a84ae06d0deb2fdcee958e2e1beb76da2319952c7cd9d0ba52db not found: ID does not exist" Dec 11 09:18:38 crc kubenswrapper[4992]: I1211 09:18:38.107239 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" path="/var/lib/kubelet/pods/444c6cb3-2e67-4abf-8701-7f7e48937898/volumes" Dec 11 09:20:35 crc kubenswrapper[4992]: I1211 09:20:35.378395 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:20:35 crc kubenswrapper[4992]: I1211 09:20:35.378994 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 11 09:21:05 crc kubenswrapper[4992]: I1211 09:21:05.378607 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:21:05 crc kubenswrapper[4992]: I1211 09:21:05.379219 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:21:35 crc kubenswrapper[4992]: I1211 09:21:35.390188 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:21:35 crc kubenswrapper[4992]: I1211 09:21:35.390766 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:21:35 crc kubenswrapper[4992]: I1211 09:21:35.390815 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:21:35 crc kubenswrapper[4992]: I1211 09:21:35.391542 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40458e4ef0a84225e9d2bce5983bf5853db92fe0d6f20c0aa5b66108f1866c20"} 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:21:35 crc kubenswrapper[4992]: I1211 09:21:35.391598 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://40458e4ef0a84225e9d2bce5983bf5853db92fe0d6f20c0aa5b66108f1866c20" gracePeriod=600 Dec 11 09:21:36 crc kubenswrapper[4992]: I1211 09:21:36.327643 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="40458e4ef0a84225e9d2bce5983bf5853db92fe0d6f20c0aa5b66108f1866c20" exitCode=0 Dec 11 09:21:36 crc kubenswrapper[4992]: I1211 09:21:36.327687 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"40458e4ef0a84225e9d2bce5983bf5853db92fe0d6f20c0aa5b66108f1866c20"} Dec 11 09:21:36 crc kubenswrapper[4992]: I1211 09:21:36.328179 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691"} Dec 11 09:21:36 crc kubenswrapper[4992]: I1211 09:21:36.328203 4992 scope.go:117] "RemoveContainer" containerID="ae413915cde1fc4d4e3ba00f40bb3ea3c2bb66152b7e66d40d7ea0b5cfecd724" Dec 11 09:23:35 crc kubenswrapper[4992]: I1211 09:23:35.378330 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 11 09:23:35 crc kubenswrapper[4992]: I1211 09:23:35.379189 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:24:05 crc kubenswrapper[4992]: I1211 09:24:05.378986 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:24:05 crc kubenswrapper[4992]: I1211 09:24:05.379578 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:24:18 crc kubenswrapper[4992]: I1211 09:24:18.510583 4992 generic.go:334] "Generic (PLEG): container finished" podID="79d2a033-d073-439d-8d2c-779b95da30f4" containerID="e7ee74ec580574d9e66d2ab171ef9f7607c3ea202bfdebaa5c4cf3e4a8b7331c" exitCode=0 Dec 11 09:24:18 crc kubenswrapper[4992]: I1211 09:24:18.510681 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"79d2a033-d073-439d-8d2c-779b95da30f4","Type":"ContainerDied","Data":"e7ee74ec580574d9e66d2ab171ef9f7607c3ea202bfdebaa5c4cf3e4a8b7331c"} Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.891686 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.993016 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ca-certs\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.993134 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-temporary\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.993240 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-workdir\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.993347 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ssh-key\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.993842 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.999727 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:24:19 crc kubenswrapper[4992]: I1211 09:24:19.999846 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79dtx\" (UniqueName: \"kubernetes.io/projected/79d2a033-d073-439d-8d2c-779b95da30f4-kube-api-access-79dtx\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.000258 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-config-data\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.000311 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.000371 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 
09:24:20.000404 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config-secret\") pod \"79d2a033-d073-439d-8d2c-779b95da30f4\" (UID: \"79d2a033-d073-439d-8d2c-779b95da30f4\") " Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.000843 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-config-data" (OuterVolumeSpecName: "config-data") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.001623 4992 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.001670 4992 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/79d2a033-d073-439d-8d2c-779b95da30f4-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.001684 4992 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.005813 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.006495 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d2a033-d073-439d-8d2c-779b95da30f4-kube-api-access-79dtx" (OuterVolumeSpecName: "kube-api-access-79dtx") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "kube-api-access-79dtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.024432 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.025404 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.034001 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.052182 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "79d2a033-d073-439d-8d2c-779b95da30f4" (UID: "79d2a033-d073-439d-8d2c-779b95da30f4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.104243 4992 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.104584 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79dtx\" (UniqueName: \"kubernetes.io/projected/79d2a033-d073-439d-8d2c-779b95da30f4-kube-api-access-79dtx\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.104725 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.104830 4992 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.110241 4992 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.112617 4992 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/79d2a033-d073-439d-8d2c-779b95da30f4-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.126363 4992 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.214301 4992 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.529490 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"79d2a033-d073-439d-8d2c-779b95da30f4","Type":"ContainerDied","Data":"7eb765eec9e345ab2d697ffa2064f2c729255a737dfde2ab7adde2f4c41ab42c"} Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.529531 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb765eec9e345ab2d697ffa2064f2c729255a737dfde2ab7adde2f4c41ab42c" Dec 11 09:24:20 crc kubenswrapper[4992]: I1211 09:24:20.529531 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.216417 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 09:24:27 crc kubenswrapper[4992]: E1211 09:24:27.217540 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="registry-server" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.217559 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="registry-server" Dec 11 09:24:27 crc kubenswrapper[4992]: E1211 09:24:27.217579 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="extract-utilities" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.217587 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="extract-utilities" Dec 11 09:24:27 crc kubenswrapper[4992]: E1211 09:24:27.217603 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d2a033-d073-439d-8d2c-779b95da30f4" containerName="tempest-tests-tempest-tests-runner" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.217612 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d2a033-d073-439d-8d2c-779b95da30f4" containerName="tempest-tests-tempest-tests-runner" Dec 11 09:24:27 crc kubenswrapper[4992]: E1211 09:24:27.217657 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="extract-content" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.217666 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="extract-content" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.217904 4992 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="444c6cb3-2e67-4abf-8701-7f7e48937898" containerName="registry-server" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.217937 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d2a033-d073-439d-8d2c-779b95da30f4" containerName="tempest-tests-tempest-tests-runner" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.218814 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.231150 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-t4bvx" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.234195 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.359137 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.359224 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6pc\" (UniqueName: \"kubernetes.io/projected/216b8df4-225b-4a21-abf4-23a79bab418a-kube-api-access-dc6pc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.461498 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.461626 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc6pc\" (UniqueName: \"kubernetes.io/projected/216b8df4-225b-4a21-abf4-23a79bab418a-kube-api-access-dc6pc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.461971 4992 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.482776 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc6pc\" (UniqueName: \"kubernetes.io/projected/216b8df4-225b-4a21-abf4-23a79bab418a-kube-api-access-dc6pc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 crc kubenswrapper[4992]: I1211 09:24:27.492817 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"216b8df4-225b-4a21-abf4-23a79bab418a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:27 
crc kubenswrapper[4992]: I1211 09:24:27.559210 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 09:24:28 crc kubenswrapper[4992]: I1211 09:24:28.017118 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 09:24:28 crc kubenswrapper[4992]: I1211 09:24:28.026194 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:24:28 crc kubenswrapper[4992]: I1211 09:24:28.612402 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"216b8df4-225b-4a21-abf4-23a79bab418a","Type":"ContainerStarted","Data":"29ff0387652327a9a892a3fd9f64a7034588bda90041642a47dc8cb22ea0ad15"} Dec 11 09:24:29 crc kubenswrapper[4992]: I1211 09:24:29.638852 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"216b8df4-225b-4a21-abf4-23a79bab418a","Type":"ContainerStarted","Data":"25bb81d0550fd38274fe6ccb27293e0f51b827af66cfb761592a570c8a322b5a"} Dec 11 09:24:29 crc kubenswrapper[4992]: I1211 09:24:29.663422 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.8605471150000001 podStartE2EDuration="2.663406042s" podCreationTimestamp="2025-12-11 09:24:27 +0000 UTC" firstStartedPulling="2025-12-11 09:24:28.025968795 +0000 UTC m=+3692.285442721" lastFinishedPulling="2025-12-11 09:24:28.828827682 +0000 UTC m=+3693.088301648" observedRunningTime="2025-12-11 09:24:29.659741272 +0000 UTC m=+3693.919215218" watchObservedRunningTime="2025-12-11 09:24:29.663406042 +0000 UTC m=+3693.922879968" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.513245 4992 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-c828g"] Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.516482 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.577458 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c828g"] Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.645197 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-utilities\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.645531 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-catalog-content\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.645743 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbl5\" (UniqueName: \"kubernetes.io/projected/798aa5ea-a72c-4792-9196-20eb38c86c81-kube-api-access-qqbl5\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.748161 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-utilities\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") 
" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.748237 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-catalog-content\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.748341 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqbl5\" (UniqueName: \"kubernetes.io/projected/798aa5ea-a72c-4792-9196-20eb38c86c81-kube-api-access-qqbl5\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.748621 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-utilities\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.748654 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-catalog-content\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.769706 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqbl5\" (UniqueName: \"kubernetes.io/projected/798aa5ea-a72c-4792-9196-20eb38c86c81-kube-api-access-qqbl5\") pod \"community-operators-c828g\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " 
pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:31 crc kubenswrapper[4992]: I1211 09:24:31.838424 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:32 crc kubenswrapper[4992]: W1211 09:24:32.345994 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod798aa5ea_a72c_4792_9196_20eb38c86c81.slice/crio-6b22f9759c35f5935e8ad9a18c0f9da5c9aadf3c9e7d6ee2ba8a954d7e968640 WatchSource:0}: Error finding container 6b22f9759c35f5935e8ad9a18c0f9da5c9aadf3c9e7d6ee2ba8a954d7e968640: Status 404 returned error can't find the container with id 6b22f9759c35f5935e8ad9a18c0f9da5c9aadf3c9e7d6ee2ba8a954d7e968640 Dec 11 09:24:32 crc kubenswrapper[4992]: I1211 09:24:32.347813 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c828g"] Dec 11 09:24:32 crc kubenswrapper[4992]: I1211 09:24:32.668304 4992 generic.go:334] "Generic (PLEG): container finished" podID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerID="e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db" exitCode=0 Dec 11 09:24:32 crc kubenswrapper[4992]: I1211 09:24:32.668359 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerDied","Data":"e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db"} Dec 11 09:24:32 crc kubenswrapper[4992]: I1211 09:24:32.668667 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerStarted","Data":"6b22f9759c35f5935e8ad9a18c0f9da5c9aadf3c9e7d6ee2ba8a954d7e968640"} Dec 11 09:24:33 crc kubenswrapper[4992]: I1211 09:24:33.682926 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerStarted","Data":"f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e"} Dec 11 09:24:34 crc kubenswrapper[4992]: I1211 09:24:34.695099 4992 generic.go:334] "Generic (PLEG): container finished" podID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerID="f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e" exitCode=0 Dec 11 09:24:34 crc kubenswrapper[4992]: I1211 09:24:34.695228 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerDied","Data":"f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e"} Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.378389 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.378442 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.378480 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.379213 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.379273 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" gracePeriod=600 Dec 11 09:24:35 crc kubenswrapper[4992]: E1211 09:24:35.522585 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.710815 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerStarted","Data":"5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e"} Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.715087 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" exitCode=0 Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.715145 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691"} Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.715189 4992 scope.go:117] "RemoveContainer" containerID="40458e4ef0a84225e9d2bce5983bf5853db92fe0d6f20c0aa5b66108f1866c20" Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.715704 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:24:35 crc kubenswrapper[4992]: E1211 09:24:35.715997 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:24:35 crc kubenswrapper[4992]: I1211 09:24:35.740503 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c828g" podStartSLOduration=2.181212698 podStartE2EDuration="4.740479012s" podCreationTimestamp="2025-12-11 09:24:31 +0000 UTC" firstStartedPulling="2025-12-11 09:24:32.669721942 +0000 UTC m=+3696.929195868" lastFinishedPulling="2025-12-11 09:24:35.228988246 +0000 UTC m=+3699.488462182" observedRunningTime="2025-12-11 09:24:35.731062733 +0000 UTC m=+3699.990536659" watchObservedRunningTime="2025-12-11 09:24:35.740479012 +0000 UTC m=+3699.999952938" Dec 11 09:24:41 crc kubenswrapper[4992]: I1211 09:24:41.839079 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:41 crc kubenswrapper[4992]: I1211 09:24:41.839684 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:41 crc kubenswrapper[4992]: I1211 09:24:41.919732 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:42 crc kubenswrapper[4992]: I1211 09:24:42.834693 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:42 crc kubenswrapper[4992]: I1211 09:24:42.883264 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c828g"] Dec 11 09:24:44 crc kubenswrapper[4992]: I1211 09:24:44.806318 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c828g" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="registry-server" containerID="cri-o://5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e" gracePeriod=2 Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.260294 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.319731 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-catalog-content\") pod \"798aa5ea-a72c-4792-9196-20eb38c86c81\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.319981 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbl5\" (UniqueName: \"kubernetes.io/projected/798aa5ea-a72c-4792-9196-20eb38c86c81-kube-api-access-qqbl5\") pod \"798aa5ea-a72c-4792-9196-20eb38c86c81\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.320126 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-utilities\") pod \"798aa5ea-a72c-4792-9196-20eb38c86c81\" (UID: \"798aa5ea-a72c-4792-9196-20eb38c86c81\") " Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.320858 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-utilities" (OuterVolumeSpecName: "utilities") pod "798aa5ea-a72c-4792-9196-20eb38c86c81" (UID: "798aa5ea-a72c-4792-9196-20eb38c86c81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.326267 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/798aa5ea-a72c-4792-9196-20eb38c86c81-kube-api-access-qqbl5" (OuterVolumeSpecName: "kube-api-access-qqbl5") pod "798aa5ea-a72c-4792-9196-20eb38c86c81" (UID: "798aa5ea-a72c-4792-9196-20eb38c86c81"). InnerVolumeSpecName "kube-api-access-qqbl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.374371 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "798aa5ea-a72c-4792-9196-20eb38c86c81" (UID: "798aa5ea-a72c-4792-9196-20eb38c86c81"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.422977 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.423002 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqbl5\" (UniqueName: \"kubernetes.io/projected/798aa5ea-a72c-4792-9196-20eb38c86c81-kube-api-access-qqbl5\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.423012 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/798aa5ea-a72c-4792-9196-20eb38c86c81-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.817303 4992 generic.go:334] "Generic (PLEG): container finished" podID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerID="5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e" exitCode=0 Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.817347 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerDied","Data":"5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e"} Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.817365 4992 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-c828g" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.817383 4992 scope.go:117] "RemoveContainer" containerID="5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.817371 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c828g" event={"ID":"798aa5ea-a72c-4792-9196-20eb38c86c81","Type":"ContainerDied","Data":"6b22f9759c35f5935e8ad9a18c0f9da5c9aadf3c9e7d6ee2ba8a954d7e968640"} Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.844535 4992 scope.go:117] "RemoveContainer" containerID="f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.864347 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c828g"] Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.874619 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c828g"] Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.876783 4992 scope.go:117] "RemoveContainer" containerID="e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.939881 4992 scope.go:117] "RemoveContainer" containerID="5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e" Dec 11 09:24:45 crc kubenswrapper[4992]: E1211 09:24:45.940458 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e\": container with ID starting with 5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e not found: ID does not exist" containerID="5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.940488 
4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e"} err="failed to get container status \"5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e\": rpc error: code = NotFound desc = could not find container \"5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e\": container with ID starting with 5259279db57dbb89f963dbdbd97583bc93cc75b1b91b6de5881c4709d0141d7e not found: ID does not exist" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.940510 4992 scope.go:117] "RemoveContainer" containerID="f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e" Dec 11 09:24:45 crc kubenswrapper[4992]: E1211 09:24:45.940900 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e\": container with ID starting with f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e not found: ID does not exist" containerID="f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.940951 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e"} err="failed to get container status \"f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e\": rpc error: code = NotFound desc = could not find container \"f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e\": container with ID starting with f6396a3b0d7c9c6c8bc85185c682ffa4ca01983771d07765537092b4d8c05d3e not found: ID does not exist" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.940989 4992 scope.go:117] "RemoveContainer" containerID="e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db" Dec 11 09:24:45 crc kubenswrapper[4992]: E1211 
09:24:45.941319 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db\": container with ID starting with e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db not found: ID does not exist" containerID="e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db" Dec 11 09:24:45 crc kubenswrapper[4992]: I1211 09:24:45.941351 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db"} err="failed to get container status \"e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db\": rpc error: code = NotFound desc = could not find container \"e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db\": container with ID starting with e9328439fc5b97fd5864fedfcdb08982a6c61d44967c09de941cef3fbe5411db not found: ID does not exist" Dec 11 09:24:46 crc kubenswrapper[4992]: I1211 09:24:46.130412 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" path="/var/lib/kubelet/pods/798aa5ea-a72c-4792-9196-20eb38c86c81/volumes" Dec 11 09:24:51 crc kubenswrapper[4992]: I1211 09:24:51.095815 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:24:51 crc kubenswrapper[4992]: E1211 09:24:51.096705 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.943234 
4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-knqll/must-gather-mj6l2"] Dec 11 09:24:53 crc kubenswrapper[4992]: E1211 09:24:53.944128 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="registry-server" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.944146 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="registry-server" Dec 11 09:24:53 crc kubenswrapper[4992]: E1211 09:24:53.944171 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="extract-content" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.944179 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="extract-content" Dec 11 09:24:53 crc kubenswrapper[4992]: E1211 09:24:53.944216 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="extract-utilities" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.944225 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="extract-utilities" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.944472 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="798aa5ea-a72c-4792-9196-20eb38c86c81" containerName="registry-server" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.945975 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.948742 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-knqll"/"openshift-service-ca.crt" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.950016 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-knqll"/"default-dockercfg-bbb7b" Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.957194 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-knqll/must-gather-mj6l2"] Dec 11 09:24:53 crc kubenswrapper[4992]: I1211 09:24:53.961032 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-knqll"/"kube-root-ca.crt" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.092416 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d23c00b-c5f8-4eab-9e65-0765282574eb-must-gather-output\") pod \"must-gather-mj6l2\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.092664 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5gr\" (UniqueName: \"kubernetes.io/projected/5d23c00b-c5f8-4eab-9e65-0765282574eb-kube-api-access-bp5gr\") pod \"must-gather-mj6l2\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.195195 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d23c00b-c5f8-4eab-9e65-0765282574eb-must-gather-output\") pod \"must-gather-mj6l2\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " 
pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.195307 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5gr\" (UniqueName: \"kubernetes.io/projected/5d23c00b-c5f8-4eab-9e65-0765282574eb-kube-api-access-bp5gr\") pod \"must-gather-mj6l2\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.195656 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d23c00b-c5f8-4eab-9e65-0765282574eb-must-gather-output\") pod \"must-gather-mj6l2\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.213751 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5gr\" (UniqueName: \"kubernetes.io/projected/5d23c00b-c5f8-4eab-9e65-0765282574eb-kube-api-access-bp5gr\") pod \"must-gather-mj6l2\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.264569 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.757186 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-knqll/must-gather-mj6l2"] Dec 11 09:24:54 crc kubenswrapper[4992]: I1211 09:24:54.901173 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/must-gather-mj6l2" event={"ID":"5d23c00b-c5f8-4eab-9e65-0765282574eb","Type":"ContainerStarted","Data":"2d5bb5abd4b7e71e2cbe4d3fe8840f3c447e0fbdbd2c91914205f67e60cad080"} Dec 11 09:25:03 crc kubenswrapper[4992]: I1211 09:25:03.010044 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/must-gather-mj6l2" event={"ID":"5d23c00b-c5f8-4eab-9e65-0765282574eb","Type":"ContainerStarted","Data":"4d8a24f2135f9281aa8c82ae0f5e3253a59e5ed5ea7d76cfb297a40cca535137"} Dec 11 09:25:03 crc kubenswrapper[4992]: I1211 09:25:03.010715 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/must-gather-mj6l2" event={"ID":"5d23c00b-c5f8-4eab-9e65-0765282574eb","Type":"ContainerStarted","Data":"25da01794a90bd7ed87a8f5cd5e5ad0736516367b36286d63448dc1be2a53ebc"} Dec 11 09:25:03 crc kubenswrapper[4992]: I1211 09:25:03.032390 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-knqll/must-gather-mj6l2" podStartSLOduration=2.457493274 podStartE2EDuration="10.032366908s" podCreationTimestamp="2025-12-11 09:24:53 +0000 UTC" firstStartedPulling="2025-12-11 09:24:54.780889113 +0000 UTC m=+3719.040363039" lastFinishedPulling="2025-12-11 09:25:02.355762747 +0000 UTC m=+3726.615236673" observedRunningTime="2025-12-11 09:25:03.025621113 +0000 UTC m=+3727.285095049" watchObservedRunningTime="2025-12-11 09:25:03.032366908 +0000 UTC m=+3727.291840834" Dec 11 09:25:04 crc kubenswrapper[4992]: I1211 09:25:04.095948 4992 scope.go:117] "RemoveContainer" 
containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:25:04 crc kubenswrapper[4992]: E1211 09:25:04.096273 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.105706 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-knqll/crc-debug-bqjqx"] Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.108283 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.253400 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44gcp\" (UniqueName: \"kubernetes.io/projected/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-kube-api-access-44gcp\") pod \"crc-debug-bqjqx\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.254040 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-host\") pod \"crc-debug-bqjqx\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.355498 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-host\") pod \"crc-debug-bqjqx\" (UID: 
\"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.355576 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44gcp\" (UniqueName: \"kubernetes.io/projected/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-kube-api-access-44gcp\") pod \"crc-debug-bqjqx\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.355709 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-host\") pod \"crc-debug-bqjqx\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.376084 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44gcp\" (UniqueName: \"kubernetes.io/projected/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-kube-api-access-44gcp\") pod \"crc-debug-bqjqx\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: I1211 09:25:06.450599 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:25:06 crc kubenswrapper[4992]: W1211 09:25:06.488283 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded733bb4_7c51_4cb0_80d2_0228fb8469a2.slice/crio-0315cf30a2a0317067233655d136e0a1300c88f36ac6a7bfa1957260c88d250f WatchSource:0}: Error finding container 0315cf30a2a0317067233655d136e0a1300c88f36ac6a7bfa1957260c88d250f: Status 404 returned error can't find the container with id 0315cf30a2a0317067233655d136e0a1300c88f36ac6a7bfa1957260c88d250f Dec 11 09:25:07 crc kubenswrapper[4992]: I1211 09:25:07.052575 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-bqjqx" event={"ID":"ed733bb4-7c51-4cb0-80d2-0228fb8469a2","Type":"ContainerStarted","Data":"0315cf30a2a0317067233655d136e0a1300c88f36ac6a7bfa1957260c88d250f"} Dec 11 09:25:17 crc kubenswrapper[4992]: I1211 09:25:17.094873 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:25:17 crc kubenswrapper[4992]: E1211 09:25:17.095693 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:25:19 crc kubenswrapper[4992]: I1211 09:25:19.220821 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-bqjqx" event={"ID":"ed733bb4-7c51-4cb0-80d2-0228fb8469a2","Type":"ContainerStarted","Data":"5b91f2721143fa74eaa226e1c6e6966fa7c559e97d75fb957ad8bf271ce3dae5"} Dec 11 09:25:19 crc kubenswrapper[4992]: I1211 09:25:19.242505 4992 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-knqll/crc-debug-bqjqx" podStartSLOduration=0.803859905 podStartE2EDuration="13.242486348s" podCreationTimestamp="2025-12-11 09:25:06 +0000 UTC" firstStartedPulling="2025-12-11 09:25:06.491175305 +0000 UTC m=+3730.750649231" lastFinishedPulling="2025-12-11 09:25:18.929801748 +0000 UTC m=+3743.189275674" observedRunningTime="2025-12-11 09:25:19.232050404 +0000 UTC m=+3743.491524350" watchObservedRunningTime="2025-12-11 09:25:19.242486348 +0000 UTC m=+3743.501960274" Dec 11 09:25:28 crc kubenswrapper[4992]: I1211 09:25:28.095489 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:25:28 crc kubenswrapper[4992]: E1211 09:25:28.097056 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:25:42 crc kubenswrapper[4992]: I1211 09:25:42.095211 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:25:42 crc kubenswrapper[4992]: E1211 09:25:42.095989 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:25:53 crc kubenswrapper[4992]: I1211 09:25:53.096470 4992 scope.go:117] 
"RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:25:53 crc kubenswrapper[4992]: E1211 09:25:53.097223 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:26:01 crc kubenswrapper[4992]: I1211 09:26:01.653163 4992 generic.go:334] "Generic (PLEG): container finished" podID="ed733bb4-7c51-4cb0-80d2-0228fb8469a2" containerID="5b91f2721143fa74eaa226e1c6e6966fa7c559e97d75fb957ad8bf271ce3dae5" exitCode=0 Dec 11 09:26:01 crc kubenswrapper[4992]: I1211 09:26:01.653254 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-bqjqx" event={"ID":"ed733bb4-7c51-4cb0-80d2-0228fb8469a2","Type":"ContainerDied","Data":"5b91f2721143fa74eaa226e1c6e6966fa7c559e97d75fb957ad8bf271ce3dae5"} Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.769833 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.810891 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-knqll/crc-debug-bqjqx"] Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.821853 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-knqll/crc-debug-bqjqx"] Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.904252 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-host\") pod \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.904383 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-host" (OuterVolumeSpecName: "host") pod "ed733bb4-7c51-4cb0-80d2-0228fb8469a2" (UID: "ed733bb4-7c51-4cb0-80d2-0228fb8469a2"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.904438 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44gcp\" (UniqueName: \"kubernetes.io/projected/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-kube-api-access-44gcp\") pod \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\" (UID: \"ed733bb4-7c51-4cb0-80d2-0228fb8469a2\") " Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.904871 4992 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-host\") on node \"crc\" DevicePath \"\"" Dec 11 09:26:02 crc kubenswrapper[4992]: I1211 09:26:02.912860 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-kube-api-access-44gcp" (OuterVolumeSpecName: "kube-api-access-44gcp") pod "ed733bb4-7c51-4cb0-80d2-0228fb8469a2" (UID: "ed733bb4-7c51-4cb0-80d2-0228fb8469a2"). InnerVolumeSpecName "kube-api-access-44gcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.007321 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44gcp\" (UniqueName: \"kubernetes.io/projected/ed733bb4-7c51-4cb0-80d2-0228fb8469a2-kube-api-access-44gcp\") on node \"crc\" DevicePath \"\"" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.673687 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0315cf30a2a0317067233655d136e0a1300c88f36ac6a7bfa1957260c88d250f" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.673769 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-bqjqx" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.959921 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-knqll/crc-debug-8p8bd"] Dec 11 09:26:03 crc kubenswrapper[4992]: E1211 09:26:03.960388 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed733bb4-7c51-4cb0-80d2-0228fb8469a2" containerName="container-00" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.960403 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed733bb4-7c51-4cb0-80d2-0228fb8469a2" containerName="container-00" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.960681 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed733bb4-7c51-4cb0-80d2-0228fb8469a2" containerName="container-00" Dec 11 09:26:03 crc kubenswrapper[4992]: I1211 09:26:03.961483 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.026087 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-host\") pod \"crc-debug-8p8bd\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.026231 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr875\" (UniqueName: \"kubernetes.io/projected/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-kube-api-access-qr875\") pod \"crc-debug-8p8bd\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.094315 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 
11 09:26:04 crc kubenswrapper[4992]: E1211 09:26:04.094865 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.107833 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed733bb4-7c51-4cb0-80d2-0228fb8469a2" path="/var/lib/kubelet/pods/ed733bb4-7c51-4cb0-80d2-0228fb8469a2/volumes" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.128815 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-host\") pod \"crc-debug-8p8bd\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.128904 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-host\") pod \"crc-debug-8p8bd\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.129113 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr875\" (UniqueName: \"kubernetes.io/projected/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-kube-api-access-qr875\") pod \"crc-debug-8p8bd\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.153897 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qr875\" (UniqueName: \"kubernetes.io/projected/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-kube-api-access-qr875\") pod \"crc-debug-8p8bd\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.276725 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.683388 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-8p8bd" event={"ID":"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f","Type":"ContainerStarted","Data":"f28fce16447f949b5ec52b814c1c9f1210fac5cec05d46b879f33b246f9fc4f1"} Dec 11 09:26:04 crc kubenswrapper[4992]: I1211 09:26:04.683982 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-8p8bd" event={"ID":"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f","Type":"ContainerStarted","Data":"9417033f75033d5b1054ffb33cc157ce58c80b27153b7ca8e576ea54c8e692b4"} Dec 11 09:26:05 crc kubenswrapper[4992]: I1211 09:26:05.693788 4992 generic.go:334] "Generic (PLEG): container finished" podID="6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" containerID="f28fce16447f949b5ec52b814c1c9f1210fac5cec05d46b879f33b246f9fc4f1" exitCode=0 Dec 11 09:26:05 crc kubenswrapper[4992]: I1211 09:26:05.693826 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-8p8bd" event={"ID":"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f","Type":"ContainerDied","Data":"f28fce16447f949b5ec52b814c1c9f1210fac5cec05d46b879f33b246f9fc4f1"} Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.133973 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-knqll/crc-debug-8p8bd"] Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.144514 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-knqll/crc-debug-8p8bd"] Dec 11 
09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.803956 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.879136 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-host\") pod \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.879274 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr875\" (UniqueName: \"kubernetes.io/projected/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-kube-api-access-qr875\") pod \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\" (UID: \"6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f\") " Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.879283 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-host" (OuterVolumeSpecName: "host") pod "6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" (UID: "6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.880101 4992 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-host\") on node \"crc\" DevicePath \"\"" Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.884711 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-kube-api-access-qr875" (OuterVolumeSpecName: "kube-api-access-qr875") pod "6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" (UID: "6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f"). InnerVolumeSpecName "kube-api-access-qr875". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:26:06 crc kubenswrapper[4992]: I1211 09:26:06.982555 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr875\" (UniqueName: \"kubernetes.io/projected/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f-kube-api-access-qr875\") on node \"crc\" DevicePath \"\"" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.288690 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-knqll/crc-debug-jfzqb"] Dec 11 09:26:07 crc kubenswrapper[4992]: E1211 09:26:07.289192 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" containerName="container-00" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.289212 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" containerName="container-00" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.289451 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" containerName="container-00" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.290177 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.391682 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6c25e256-63e3-4438-af5c-ea9b0c3abcae-host\") pod \"crc-debug-jfzqb\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.391867 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lh5p\" (UniqueName: \"kubernetes.io/projected/6c25e256-63e3-4438-af5c-ea9b0c3abcae-kube-api-access-9lh5p\") pod \"crc-debug-jfzqb\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.494336 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6c25e256-63e3-4438-af5c-ea9b0c3abcae-host\") pod \"crc-debug-jfzqb\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.494516 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lh5p\" (UniqueName: \"kubernetes.io/projected/6c25e256-63e3-4438-af5c-ea9b0c3abcae-kube-api-access-9lh5p\") pod \"crc-debug-jfzqb\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.494550 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6c25e256-63e3-4438-af5c-ea9b0c3abcae-host\") pod \"crc-debug-jfzqb\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc 
kubenswrapper[4992]: I1211 09:26:07.520540 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lh5p\" (UniqueName: \"kubernetes.io/projected/6c25e256-63e3-4438-af5c-ea9b0c3abcae-kube-api-access-9lh5p\") pod \"crc-debug-jfzqb\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.607271 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:07 crc kubenswrapper[4992]: W1211 09:26:07.639077 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c25e256_63e3_4438_af5c_ea9b0c3abcae.slice/crio-643a58772e9a8b99058ef7d9a8457f058c739a8053fe0d3a987eb3c3f52a35c1 WatchSource:0}: Error finding container 643a58772e9a8b99058ef7d9a8457f058c739a8053fe0d3a987eb3c3f52a35c1: Status 404 returned error can't find the container with id 643a58772e9a8b99058ef7d9a8457f058c739a8053fe0d3a987eb3c3f52a35c1 Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.720386 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9417033f75033d5b1054ffb33cc157ce58c80b27153b7ca8e576ea54c8e692b4" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.720466 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-8p8bd" Dec 11 09:26:07 crc kubenswrapper[4992]: I1211 09:26:07.724545 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-jfzqb" event={"ID":"6c25e256-63e3-4438-af5c-ea9b0c3abcae","Type":"ContainerStarted","Data":"643a58772e9a8b99058ef7d9a8457f058c739a8053fe0d3a987eb3c3f52a35c1"} Dec 11 09:26:08 crc kubenswrapper[4992]: I1211 09:26:08.108744 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f" path="/var/lib/kubelet/pods/6e02efbf-25b3-4b7d-8b21-4aae7bbc8f0f/volumes" Dec 11 09:26:08 crc kubenswrapper[4992]: I1211 09:26:08.735246 4992 generic.go:334] "Generic (PLEG): container finished" podID="6c25e256-63e3-4438-af5c-ea9b0c3abcae" containerID="95b348128566053641884c0238eaa8e75c68cb5d15bcd10a9cc0bf4e9fa4e8b0" exitCode=0 Dec 11 09:26:08 crc kubenswrapper[4992]: I1211 09:26:08.735273 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/crc-debug-jfzqb" event={"ID":"6c25e256-63e3-4438-af5c-ea9b0c3abcae","Type":"ContainerDied","Data":"95b348128566053641884c0238eaa8e75c68cb5d15bcd10a9cc0bf4e9fa4e8b0"} Dec 11 09:26:08 crc kubenswrapper[4992]: I1211 09:26:08.773930 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-knqll/crc-debug-jfzqb"] Dec 11 09:26:08 crc kubenswrapper[4992]: I1211 09:26:08.784479 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-knqll/crc-debug-jfzqb"] Dec 11 09:26:09 crc kubenswrapper[4992]: I1211 09:26:09.870846 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:09 crc kubenswrapper[4992]: I1211 09:26:09.937380 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6c25e256-63e3-4438-af5c-ea9b0c3abcae-host\") pod \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " Dec 11 09:26:09 crc kubenswrapper[4992]: I1211 09:26:09.937506 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c25e256-63e3-4438-af5c-ea9b0c3abcae-host" (OuterVolumeSpecName: "host") pod "6c25e256-63e3-4438-af5c-ea9b0c3abcae" (UID: "6c25e256-63e3-4438-af5c-ea9b0c3abcae"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:26:09 crc kubenswrapper[4992]: I1211 09:26:09.937723 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lh5p\" (UniqueName: \"kubernetes.io/projected/6c25e256-63e3-4438-af5c-ea9b0c3abcae-kube-api-access-9lh5p\") pod \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\" (UID: \"6c25e256-63e3-4438-af5c-ea9b0c3abcae\") " Dec 11 09:26:09 crc kubenswrapper[4992]: I1211 09:26:09.938165 4992 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6c25e256-63e3-4438-af5c-ea9b0c3abcae-host\") on node \"crc\" DevicePath \"\"" Dec 11 09:26:09 crc kubenswrapper[4992]: I1211 09:26:09.949861 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c25e256-63e3-4438-af5c-ea9b0c3abcae-kube-api-access-9lh5p" (OuterVolumeSpecName: "kube-api-access-9lh5p") pod "6c25e256-63e3-4438-af5c-ea9b0c3abcae" (UID: "6c25e256-63e3-4438-af5c-ea9b0c3abcae"). InnerVolumeSpecName "kube-api-access-9lh5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:26:10 crc kubenswrapper[4992]: I1211 09:26:10.039349 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lh5p\" (UniqueName: \"kubernetes.io/projected/6c25e256-63e3-4438-af5c-ea9b0c3abcae-kube-api-access-9lh5p\") on node \"crc\" DevicePath \"\"" Dec 11 09:26:10 crc kubenswrapper[4992]: I1211 09:26:10.106459 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c25e256-63e3-4438-af5c-ea9b0c3abcae" path="/var/lib/kubelet/pods/6c25e256-63e3-4438-af5c-ea9b0c3abcae/volumes" Dec 11 09:26:10 crc kubenswrapper[4992]: I1211 09:26:10.756514 4992 scope.go:117] "RemoveContainer" containerID="95b348128566053641884c0238eaa8e75c68cb5d15bcd10a9cc0bf4e9fa4e8b0" Dec 11 09:26:10 crc kubenswrapper[4992]: I1211 09:26:10.756957 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/crc-debug-jfzqb" Dec 11 09:26:17 crc kubenswrapper[4992]: I1211 09:26:17.095142 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:26:17 crc kubenswrapper[4992]: E1211 09:26:17.095882 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.228522 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-859586f498-26phb_4282024f-9d71-4b55-aa65-b0a91e76da62/barbican-api/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.383523 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-859586f498-26phb_4282024f-9d71-4b55-aa65-b0a91e76da62/barbican-api-log/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.462826 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67cb46677b-w6zfw_a69c55bb-ed74-4b63-a8b6-713b08b1dcb4/barbican-keystone-listener/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.465337 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67cb46677b-w6zfw_a69c55bb-ed74-4b63-a8b6-713b08b1dcb4/barbican-keystone-listener-log/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.634200 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76f54b9b99-lv6z6_8cdf31db-16c4-4bfc-bb50-27a283b61abd/barbican-worker-log/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.637410 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76f54b9b99-lv6z6_8cdf31db-16c4-4bfc-bb50-27a283b61abd/barbican-worker/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.846300 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/ceilometer-central-agent/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.850488 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7_6066a587-fdc9-4ae8-ad82-4ddf1844f9e6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:23 crc kubenswrapper[4992]: I1211 09:26:23.914164 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/ceilometer-notification-agent/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.021165 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/sg-core/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.023170 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/proxy-httpd/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.141262 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bca5ca17-107c-4c9b-8901-dcf3f962e927/cinder-api/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.210937 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bca5ca17-107c-4c9b-8901-dcf3f962e927/cinder-api-log/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.336242 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e65777ee-c1c8-48f5-a103-539738e7c293/cinder-scheduler/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.356877 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e65777ee-c1c8-48f5-a103-539738e7c293/probe/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.479550 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-scqpq_8f3e0555-dd40-4a68-bcd1-df6fb4be45df/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.589614 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mxttm_e80e1d51-3960-4957-95e0-987fc9b78120/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.672733 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-cpgkh_7e8bbdf3-3509-4a6d-a1c4-decafb575016/init/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.888122 4992 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-cpgkh_7e8bbdf3-3509-4a6d-a1c4-decafb575016/init/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.943292 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-cpgkh_7e8bbdf3-3509-4a6d-a1c4-decafb575016/dnsmasq-dns/0.log" Dec 11 09:26:24 crc kubenswrapper[4992]: I1211 09:26:24.983082 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bswcc_6af1f9e3-e349-40a7-8985-f114c1c808b3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.136443 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_112fb236-1ef9-4991-b83c-91c1081483fc/glance-log/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.148898 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_112fb236-1ef9-4991-b83c-91c1081483fc/glance-httpd/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.308879 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b8264506-1cab-488a-903d-43a6062db6ae/glance-httpd/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.329315 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b8264506-1cab-488a-903d-43a6062db6ae/glance-log/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.549598 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c6ddf9d4-c2dtv_1d82648a-9f40-4a60-8532-ec3617de1f45/horizon/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.608905 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-59bbf_7243a4bc-1d82-40f0-b28f-f6a181d3771b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.820323 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-v4q2c_865a4175-ac8e-43c9-ab29-824386311e22/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:25 crc kubenswrapper[4992]: I1211 09:26:25.887727 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c6ddf9d4-c2dtv_1d82648a-9f40-4a60-8532-ec3617de1f45/horizon-log/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.079082 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29424061-lk7db_7e48de49-fca3-4449-876a-2fafff903b2e/keystone-cron/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.132162 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7ccd8c54fd-6rk8g_10274b54-502d-49df-a610-a6b7cddcce42/keystone-api/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.304711 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c46c3920-de00-4d05-9a50-406b7efd3b8d/kube-state-metrics/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.332699 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-r5lln_c8aeb03b-f704-4b27-8eb5-afeac15bcd18/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.718176 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7978c485bf-hpg7n_04b4ce41-af3f-42d1-a340-e3d20519f217/neutron-api/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.788036 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-7978c485bf-hpg7n_04b4ce41-af3f-42d1-a340-e3d20519f217/neutron-httpd/0.log" Dec 11 09:26:26 crc kubenswrapper[4992]: I1211 09:26:26.949469 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml_dd42fab7-63a0-4b66-8264-335d337ec7b3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:27 crc kubenswrapper[4992]: I1211 09:26:27.447303 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dcb13d3b-6f5a-432f-a32f-80fbf81c6adf/nova-cell0-conductor-conductor/0.log" Dec 11 09:26:27 crc kubenswrapper[4992]: I1211 09:26:27.501077 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baafc0d4-8327-40d2-a00b-27c7388b64bf/nova-api-log/0.log" Dec 11 09:26:27 crc kubenswrapper[4992]: I1211 09:26:27.593409 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baafc0d4-8327-40d2-a00b-27c7388b64bf/nova-api-api/0.log" Dec 11 09:26:27 crc kubenswrapper[4992]: I1211 09:26:27.737729 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc/nova-cell1-conductor-conductor/0.log" Dec 11 09:26:27 crc kubenswrapper[4992]: I1211 09:26:27.893804 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_34747d0d-221e-453b-9685-2e0ce24f21ff/nova-cell1-novncproxy-novncproxy/0.log" Dec 11 09:26:28 crc kubenswrapper[4992]: I1211 09:26:28.216518 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gnrls_b236958b-e08b-46ec-9e79-772bcb3d6d14/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:28 crc kubenswrapper[4992]: I1211 09:26:28.356586 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e/nova-metadata-log/0.log" Dec 11 09:26:28 crc kubenswrapper[4992]: I1211 09:26:28.546084 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c5eb79c-8f1c-4416-ab38-00b67e0b3f86/mysql-bootstrap/0.log" Dec 11 09:26:28 crc kubenswrapper[4992]: I1211 09:26:28.596111 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_5cd2254a-360d-4e10-8185-12ef58a09c9b/nova-scheduler-scheduler/0.log" Dec 11 09:26:28 crc kubenswrapper[4992]: I1211 09:26:28.793296 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c5eb79c-8f1c-4416-ab38-00b67e0b3f86/galera/0.log" Dec 11 09:26:28 crc kubenswrapper[4992]: I1211 09:26:28.793864 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c5eb79c-8f1c-4416-ab38-00b67e0b3f86/mysql-bootstrap/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.019022 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67/mysql-bootstrap/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.176090 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67/mysql-bootstrap/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.237016 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67/galera/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.347223 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_421fdf51-5a39-4d80-b066-a715006c2f85/openstackclient/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.464096 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-djp6h_43b8eb34-f000-49af-bcf9-7507f85afd2b/ovn-controller/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.509200 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e/nova-metadata-metadata/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.704415 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovsdb-server-init/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.776394 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qw2v9_25c315d3-3609-4a88-bf95-4beedb848ecf/openstack-network-exporter/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.924343 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovsdb-server-init/0.log" Dec 11 09:26:29 crc kubenswrapper[4992]: I1211 09:26:29.961304 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovs-vswitchd/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.026825 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovsdb-server/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.094805 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:26:30 crc kubenswrapper[4992]: E1211 09:26:30.095220 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.214294 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-slpxr_a47b817b-7906-4327-ba35-740815f4c02c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.249244 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5b008aff-f3e4-46b6-a5ff-52e0d80374d8/openstack-network-exporter/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.309514 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5b008aff-f3e4-46b6-a5ff-52e0d80374d8/ovn-northd/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.432526 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c51bf698-2728-4a49-b7e1-d80c304725e2/ovsdbserver-nb/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.444677 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c51bf698-2728-4a49-b7e1-d80c304725e2/openstack-network-exporter/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.684149 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a84aae8d-da28-42b4-80a4-99e157fb57ec/openstack-network-exporter/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.719131 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a84aae8d-da28-42b4-80a4-99e157fb57ec/ovsdbserver-sb/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.911982 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-74db564d44-wj6gh_1215b406-66dc-4132-a0ea-76010ee7b44d/placement-api/0.log" Dec 11 09:26:30 crc kubenswrapper[4992]: I1211 09:26:30.998171 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b10485db-da0e-493a-ad33-82634346be84/setup-container/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.009479 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74db564d44-wj6gh_1215b406-66dc-4132-a0ea-76010ee7b44d/placement-log/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.144796 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b10485db-da0e-493a-ad33-82634346be84/setup-container/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.200806 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_614cd874-917b-4851-b702-cfb170fcec4d/setup-container/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.258556 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b10485db-da0e-493a-ad33-82634346be84/rabbitmq/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.433791 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_614cd874-917b-4851-b702-cfb170fcec4d/setup-container/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.460052 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_614cd874-917b-4851-b702-cfb170fcec4d/rabbitmq/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.612113 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv_57455273-3fb5-408e-a80c-c42880a6b0bf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.696162 4992 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-nwx9k_ff4798f8-6563-4d95-ab98-252c0417f16f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.846416 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n_540586d6-4da9-4c8e-9866-dbe51de9f643/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:31 crc kubenswrapper[4992]: I1211 09:26:31.929879 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bb5gt_daffe0c7-1479-4565-9035-46e508469995/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:32 crc kubenswrapper[4992]: I1211 09:26:32.439004 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qd5rv_b8616302-c54e-49e1-98cb-924b70e8050f/ssh-known-hosts-edpm-deployment/0.log" Dec 11 09:26:32 crc kubenswrapper[4992]: I1211 09:26:32.642563 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d94866685-kpw9g_f10bd5d3-8ab6-4950-96e9-b683e47619ea/proxy-httpd/0.log" Dec 11 09:26:32 crc kubenswrapper[4992]: I1211 09:26:32.681191 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d94866685-kpw9g_f10bd5d3-8ab6-4950-96e9-b683e47619ea/proxy-server/0.log" Dec 11 09:26:32 crc kubenswrapper[4992]: I1211 09:26:32.761505 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dpvlt_95883dfb-ad1a-4d13-889e-4b9f73ded332/swift-ring-rebalance/0.log" Dec 11 09:26:32 crc kubenswrapper[4992]: I1211 09:26:32.911508 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-auditor/0.log" Dec 11 09:26:32 crc kubenswrapper[4992]: I1211 09:26:32.990237 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-reaper/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.032437 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-replicator/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.128990 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-server/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.158504 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-auditor/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.238123 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-server/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.252246 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-replicator/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.334255 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-updater/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.381078 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-auditor/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.442604 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-expirer/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.492351 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-replicator/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.615197 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-updater/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.626483 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-server/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.694024 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/rsync/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.755923 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/swift-recon-cron/0.log" Dec 11 09:26:33 crc kubenswrapper[4992]: I1211 09:26:33.899824 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m_35d02426-6179-4c35-8e14-f1e06f6684f6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:34 crc kubenswrapper[4992]: I1211 09:26:34.030027 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_79d2a033-d073-439d-8d2c-779b95da30f4/tempest-tests-tempest-tests-runner/0.log" Dec 11 09:26:34 crc kubenswrapper[4992]: I1211 09:26:34.669744 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8_f5fefa11-ffb3-491d-90e5-c957a37896ef/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:26:34 crc kubenswrapper[4992]: I1211 09:26:34.714337 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_216b8df4-225b-4a21-abf4-23a79bab418a/test-operator-logs-container/0.log" Dec 11 09:26:44 crc kubenswrapper[4992]: I1211 09:26:44.094714 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:26:44 crc kubenswrapper[4992]: E1211 09:26:44.095408 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:26:44 crc kubenswrapper[4992]: I1211 09:26:44.232873 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a9e5a806-cf0a-4149-81d7-803170a48b0e/memcached/0.log" Dec 11 09:26:57 crc kubenswrapper[4992]: I1211 09:26:57.095029 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:26:57 crc kubenswrapper[4992]: E1211 09:26:57.095811 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.148675 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/util/0.log" Dec 11 09:27:00 crc 
kubenswrapper[4992]: I1211 09:27:00.313915 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/util/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.325655 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/pull/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.382293 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/pull/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.517507 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/extract/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.552504 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/pull/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.564697 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/util/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.705264 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-l5mc5_a84e6e65-9f83-405a-a478-a53e125d5845/kube-rbac-proxy/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.854234 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-l5mc5_a84e6e65-9f83-405a-a478-a53e125d5845/manager/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.889327 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-55h47_07859dd8-8995-4214-8ee9-6648fa5a292e/kube-rbac-proxy/0.log" Dec 11 09:27:00 crc kubenswrapper[4992]: I1211 09:27:00.949752 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-55h47_07859dd8-8995-4214-8ee9-6648fa5a292e/manager/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.051505 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6gl95_a39ad598-19ba-42cb-9f35-538b68de7b04/kube-rbac-proxy/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.121288 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6gl95_a39ad598-19ba-42cb-9f35-538b68de7b04/manager/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.207930 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-bdp4x_42106600-d00d-477a-aaec-102ba03cb5c6/kube-rbac-proxy/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.309988 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-bdp4x_42106600-d00d-477a-aaec-102ba03cb5c6/manager/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.387334 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-hsldx_e501b125-ca5e-41f0-88c1-a9fda63de236/kube-rbac-proxy/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.426395 
4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-hsldx_e501b125-ca5e-41f0-88c1-a9fda63de236/manager/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.595936 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-jqlq7_b4d8b09b-a162-43bd-a91f-dc87e5c9c956/kube-rbac-proxy/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.628655 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-jqlq7_b4d8b09b-a162-43bd-a91f-dc87e5c9c956/manager/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.800976 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-hd9fc_fc892dae-199a-49ca-8ddd-863a6b8426d7/kube-rbac-proxy/0.log" Dec 11 09:27:01 crc kubenswrapper[4992]: I1211 09:27:01.893369 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-r792s_2a91887f-977b-43dd-b638-0391348bf5d7/kube-rbac-proxy/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.035204 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-hd9fc_fc892dae-199a-49ca-8ddd-863a6b8426d7/manager/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.060319 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-r792s_2a91887f-977b-43dd-b638-0391348bf5d7/manager/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.118027 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-l6gnl_74f7d667-67f0-459b-a7a0-f46c0e095485/kube-rbac-proxy/0.log" Dec 11 09:27:02 crc 
kubenswrapper[4992]: I1211 09:27:02.310551 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-l6gnl_74f7d667-67f0-459b-a7a0-f46c0e095485/manager/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.359209 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-2rs5h_995a7c64-c843-4200-b1cf-9fe6d774f457/manager/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.412450 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-2rs5h_995a7c64-c843-4200-b1cf-9fe6d774f457/kube-rbac-proxy/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.516560 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7nw5b_2b8e6bee-2aae-4689-898a-b298fd5a3d00/kube-rbac-proxy/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.561027 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7nw5b_2b8e6bee-2aae-4689-898a-b298fd5a3d00/manager/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.710006 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-rd8j7_aa6fcfad-b39a-4621-aebe-0b48a4106495/kube-rbac-proxy/0.log" Dec 11 09:27:02 crc kubenswrapper[4992]: I1211 09:27:02.754664 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-rd8j7_aa6fcfad-b39a-4621-aebe-0b48a4106495/manager/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.028029 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9r2v8_22b393e2-e34e-4f47-a8f8-136d9a6613f6/kube-rbac-proxy/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.144709 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-r9hft_94ff8875-2a35-47c0-8da4-1fcc4fd0836e/kube-rbac-proxy/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.164846 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9r2v8_22b393e2-e34e-4f47-a8f8-136d9a6613f6/manager/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.262334 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-r9hft_94ff8875-2a35-47c0-8da4-1fcc4fd0836e/manager/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.358309 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f5wrjt_24ddc127-1ac3-4dd9-ae14-c133c9ad387b/kube-rbac-proxy/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.448161 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f5wrjt_24ddc127-1ac3-4dd9-ae14-c133c9ad387b/manager/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.734197 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-94cdf5849-dv4jk_442cb00a-6225-47e0-a88d-6d615414e5a4/operator/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.790903 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gzhqj_36fe56e4-3db8-4d2a-8fe4-bfac398f3d92/registry-server/0.log" Dec 11 09:27:03 crc kubenswrapper[4992]: I1211 09:27:03.976313 
4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4lp9k_2b843345-399a-41e3-abe0-f7f41682250a/kube-rbac-proxy/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.017488 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4lp9k_2b843345-399a-41e3-abe0-f7f41682250a/manager/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.474270 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8q2xn_d708dd00-6c6a-4dd0-ac04-e0b57a753f1f/kube-rbac-proxy/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.532604 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8q2xn_d708dd00-6c6a-4dd0-ac04-e0b57a753f1f/manager/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.647678 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kxgst_5c6deb1d-64a1-4f75-baaf-3ce6c908b850/operator/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.687130 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d9jmh_ae97f467-cfd0-46c1-a261-36f09387f3e0/kube-rbac-proxy/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.822397 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d9jmh_ae97f467-cfd0-46c1-a261-36f09387f3e0/manager/0.log" Dec 11 09:27:04 crc kubenswrapper[4992]: I1211 09:27:04.865422 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-l54jq_2e7b36cb-508f-46e0-acd1-6eca36c331b1/kube-rbac-proxy/0.log" Dec 11 09:27:05 crc 
kubenswrapper[4992]: I1211 09:27:05.059727 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-g9552_d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a/manager/0.log" Dec 11 09:27:05 crc kubenswrapper[4992]: I1211 09:27:05.065854 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-g9552_d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a/kube-rbac-proxy/0.log" Dec 11 09:27:05 crc kubenswrapper[4992]: I1211 09:27:05.087391 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-l54jq_2e7b36cb-508f-46e0-acd1-6eca36c331b1/manager/0.log" Dec 11 09:27:05 crc kubenswrapper[4992]: I1211 09:27:05.228173 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rph2p_3635faed-4894-4eb8-94f7-33b055b860c4/kube-rbac-proxy/0.log" Dec 11 09:27:05 crc kubenswrapper[4992]: I1211 09:27:05.305905 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rph2p_3635faed-4894-4eb8-94f7-33b055b860c4/manager/0.log" Dec 11 09:27:05 crc kubenswrapper[4992]: I1211 09:27:05.785703 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5dfd9f965d-t689z_4c6d188e-81e4-4ba9-a555-5dbda4f39d1d/manager/0.log" Dec 11 09:27:09 crc kubenswrapper[4992]: I1211 09:27:09.095039 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:27:09 crc kubenswrapper[4992]: E1211 09:27:09.095926 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:27:23 crc kubenswrapper[4992]: I1211 09:27:23.095929 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:27:23 crc kubenswrapper[4992]: E1211 09:27:23.096759 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:27:24 crc kubenswrapper[4992]: I1211 09:27:24.310808 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9jmlp_f495c66f-c76e-41c7-a70b-71c7a19c8c6a/control-plane-machine-set-operator/0.log" Dec 11 09:27:24 crc kubenswrapper[4992]: I1211 09:27:24.523141 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g5d6r_7f77b180-f28c-472b-a577-44ef5012100c/machine-api-operator/0.log" Dec 11 09:27:24 crc kubenswrapper[4992]: I1211 09:27:24.613617 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g5d6r_7f77b180-f28c-472b-a577-44ef5012100c/kube-rbac-proxy/0.log" Dec 11 09:27:29 crc kubenswrapper[4992]: I1211 09:27:29.887071 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x4kfw"] Dec 11 09:27:29 crc kubenswrapper[4992]: E1211 09:27:29.888000 4992 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6c25e256-63e3-4438-af5c-ea9b0c3abcae" containerName="container-00" Dec 11 09:27:29 crc kubenswrapper[4992]: I1211 09:27:29.888012 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c25e256-63e3-4438-af5c-ea9b0c3abcae" containerName="container-00" Dec 11 09:27:29 crc kubenswrapper[4992]: I1211 09:27:29.888197 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c25e256-63e3-4438-af5c-ea9b0c3abcae" containerName="container-00" Dec 11 09:27:29 crc kubenswrapper[4992]: I1211 09:27:29.889446 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:29 crc kubenswrapper[4992]: I1211 09:27:29.904545 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4kfw"] Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.041249 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-utilities\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.041304 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd5pj\" (UniqueName: \"kubernetes.io/projected/f08ff940-289f-47d9-ad55-1cce5fddb6a5-kube-api-access-sd5pj\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.041452 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-catalog-content\") pod \"redhat-marketplace-x4kfw\" (UID: 
\"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.143081 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-utilities\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.143137 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd5pj\" (UniqueName: \"kubernetes.io/projected/f08ff940-289f-47d9-ad55-1cce5fddb6a5-kube-api-access-sd5pj\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.143219 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-catalog-content\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.143779 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-catalog-content\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.143782 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-utilities\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " 
pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.162553 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd5pj\" (UniqueName: \"kubernetes.io/projected/f08ff940-289f-47d9-ad55-1cce5fddb6a5-kube-api-access-sd5pj\") pod \"redhat-marketplace-x4kfw\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.212252 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:30 crc kubenswrapper[4992]: I1211 09:27:30.714355 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4kfw"] Dec 11 09:27:31 crc kubenswrapper[4992]: I1211 09:27:31.561680 4992 generic.go:334] "Generic (PLEG): container finished" podID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerID="5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7" exitCode=0 Dec 11 09:27:31 crc kubenswrapper[4992]: I1211 09:27:31.561998 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4kfw" event={"ID":"f08ff940-289f-47d9-ad55-1cce5fddb6a5","Type":"ContainerDied","Data":"5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7"} Dec 11 09:27:31 crc kubenswrapper[4992]: I1211 09:27:31.562029 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4kfw" event={"ID":"f08ff940-289f-47d9-ad55-1cce5fddb6a5","Type":"ContainerStarted","Data":"3de4b789f885f82423b53caf73f2bb1f424cab259a9a512862687005e983666e"} Dec 11 09:27:33 crc kubenswrapper[4992]: I1211 09:27:33.579133 4992 generic.go:334] "Generic (PLEG): container finished" podID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerID="7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e" exitCode=0 Dec 11 09:27:33 crc 
kubenswrapper[4992]: I1211 09:27:33.579215 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4kfw" event={"ID":"f08ff940-289f-47d9-ad55-1cce5fddb6a5","Type":"ContainerDied","Data":"7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e"} Dec 11 09:27:34 crc kubenswrapper[4992]: I1211 09:27:34.593194 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4kfw" event={"ID":"f08ff940-289f-47d9-ad55-1cce5fddb6a5","Type":"ContainerStarted","Data":"768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa"} Dec 11 09:27:34 crc kubenswrapper[4992]: I1211 09:27:34.619998 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x4kfw" podStartSLOduration=2.991542662 podStartE2EDuration="5.619972831s" podCreationTimestamp="2025-12-11 09:27:29 +0000 UTC" firstStartedPulling="2025-12-11 09:27:31.564247987 +0000 UTC m=+3875.823721923" lastFinishedPulling="2025-12-11 09:27:34.192678166 +0000 UTC m=+3878.452152092" observedRunningTime="2025-12-11 09:27:34.614763384 +0000 UTC m=+3878.874237310" watchObservedRunningTime="2025-12-11 09:27:34.619972831 +0000 UTC m=+3878.879446757" Dec 11 09:27:35 crc kubenswrapper[4992]: I1211 09:27:35.094924 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:27:35 crc kubenswrapper[4992]: E1211 09:27:35.095208 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:27:37 crc kubenswrapper[4992]: I1211 09:27:37.973320 4992 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wz2kx_02923a4e-c47d-47f9-8a4c-389310df14cb/cert-manager-controller/0.log" Dec 11 09:27:38 crc kubenswrapper[4992]: I1211 09:27:38.167252 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-xsswz_b8d49b68-c215-4f0e-a508-043a98247366/cert-manager-cainjector/0.log" Dec 11 09:27:38 crc kubenswrapper[4992]: I1211 09:27:38.254154 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-f6zvc_f92a18ba-a108-476e-a4f6-d7f4446b860a/cert-manager-webhook/0.log" Dec 11 09:27:40 crc kubenswrapper[4992]: I1211 09:27:40.212571 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:40 crc kubenswrapper[4992]: I1211 09:27:40.212954 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:40 crc kubenswrapper[4992]: I1211 09:27:40.259902 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:40 crc kubenswrapper[4992]: I1211 09:27:40.739748 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:40 crc kubenswrapper[4992]: I1211 09:27:40.786884 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4kfw"] Dec 11 09:27:42 crc kubenswrapper[4992]: I1211 09:27:42.660601 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x4kfw" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="registry-server" containerID="cri-o://768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa" gracePeriod=2 Dec 11 09:27:43 crc 
kubenswrapper[4992]: I1211 09:27:43.140326 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.302473 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd5pj\" (UniqueName: \"kubernetes.io/projected/f08ff940-289f-47d9-ad55-1cce5fddb6a5-kube-api-access-sd5pj\") pod \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.302529 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-utilities\") pod \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.302701 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-catalog-content\") pod \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\" (UID: \"f08ff940-289f-47d9-ad55-1cce5fddb6a5\") " Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.303585 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-utilities" (OuterVolumeSpecName: "utilities") pod "f08ff940-289f-47d9-ad55-1cce5fddb6a5" (UID: "f08ff940-289f-47d9-ad55-1cce5fddb6a5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.311831 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f08ff940-289f-47d9-ad55-1cce5fddb6a5-kube-api-access-sd5pj" (OuterVolumeSpecName: "kube-api-access-sd5pj") pod "f08ff940-289f-47d9-ad55-1cce5fddb6a5" (UID: "f08ff940-289f-47d9-ad55-1cce5fddb6a5"). InnerVolumeSpecName "kube-api-access-sd5pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.328090 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f08ff940-289f-47d9-ad55-1cce5fddb6a5" (UID: "f08ff940-289f-47d9-ad55-1cce5fddb6a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.405539 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.405576 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd5pj\" (UniqueName: \"kubernetes.io/projected/f08ff940-289f-47d9-ad55-1cce5fddb6a5-kube-api-access-sd5pj\") on node \"crc\" DevicePath \"\"" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.405589 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f08ff940-289f-47d9-ad55-1cce5fddb6a5-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.672592 4992 generic.go:334] "Generic (PLEG): container finished" podID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" 
containerID="768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa" exitCode=0 Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.672654 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4kfw" event={"ID":"f08ff940-289f-47d9-ad55-1cce5fddb6a5","Type":"ContainerDied","Data":"768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa"} Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.672678 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x4kfw" event={"ID":"f08ff940-289f-47d9-ad55-1cce5fddb6a5","Type":"ContainerDied","Data":"3de4b789f885f82423b53caf73f2bb1f424cab259a9a512862687005e983666e"} Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.672687 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x4kfw" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.672695 4992 scope.go:117] "RemoveContainer" containerID="768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.708342 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4kfw"] Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.718515 4992 scope.go:117] "RemoveContainer" containerID="7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.731252 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x4kfw"] Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.742659 4992 scope.go:117] "RemoveContainer" containerID="5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.793108 4992 scope.go:117] "RemoveContainer" containerID="768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa" Dec 11 
09:27:43 crc kubenswrapper[4992]: E1211 09:27:43.793432 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa\": container with ID starting with 768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa not found: ID does not exist" containerID="768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.793461 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa"} err="failed to get container status \"768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa\": rpc error: code = NotFound desc = could not find container \"768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa\": container with ID starting with 768f9ae2612ceeeb4fe957d390c64d2f4754c3fc8b168c7aba87892c30b2ccaa not found: ID does not exist" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.793486 4992 scope.go:117] "RemoveContainer" containerID="7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e" Dec 11 09:27:43 crc kubenswrapper[4992]: E1211 09:27:43.793824 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e\": container with ID starting with 7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e not found: ID does not exist" containerID="7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.793869 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e"} err="failed to get container status 
\"7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e\": rpc error: code = NotFound desc = could not find container \"7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e\": container with ID starting with 7f34569c51010e779c295dc44148f02e5f1fb62934874b13aabb3f918d1a242e not found: ID does not exist" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.793903 4992 scope.go:117] "RemoveContainer" containerID="5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7" Dec 11 09:27:43 crc kubenswrapper[4992]: E1211 09:27:43.794221 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7\": container with ID starting with 5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7 not found: ID does not exist" containerID="5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7" Dec 11 09:27:43 crc kubenswrapper[4992]: I1211 09:27:43.794245 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7"} err="failed to get container status \"5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7\": rpc error: code = NotFound desc = could not find container \"5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7\": container with ID starting with 5bfb6a1555caa0b7d9ca28584ce394d695264c01c178455710af6d81aeef95f7 not found: ID does not exist" Dec 11 09:27:44 crc kubenswrapper[4992]: I1211 09:27:44.106130 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" path="/var/lib/kubelet/pods/f08ff940-289f-47d9-ad55-1cce5fddb6a5/volumes" Dec 11 09:27:50 crc kubenswrapper[4992]: I1211 09:27:50.095458 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 
09:27:50 crc kubenswrapper[4992]: E1211 09:27:50.096343 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:27:50 crc kubenswrapper[4992]: I1211 09:27:50.714859 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-wkx28_65477465-ca8e-4379-bb0a-7940542990f7/nmstate-console-plugin/0.log" Dec 11 09:27:50 crc kubenswrapper[4992]: I1211 09:27:50.907171 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sh9d8_865e017c-606b-41e4-82f7-3e0520607d02/nmstate-handler/0.log" Dec 11 09:27:51 crc kubenswrapper[4992]: I1211 09:27:51.048096 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-6csbm_a098ff29-c757-4eac-b38d-33f5d50e1aea/nmstate-metrics/0.log" Dec 11 09:27:51 crc kubenswrapper[4992]: I1211 09:27:51.050556 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-6csbm_a098ff29-c757-4eac-b38d-33f5d50e1aea/kube-rbac-proxy/0.log" Dec 11 09:27:51 crc kubenswrapper[4992]: I1211 09:27:51.185255 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-mxq6q_2dec2cea-664e-4421-8e41-8c15c02aa08f/nmstate-operator/0.log" Dec 11 09:27:51 crc kubenswrapper[4992]: I1211 09:27:51.276685 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-t6ptn_ba5caba0-f6b6-400d-ab83-b1079de7af46/nmstate-webhook/0.log" Dec 11 09:28:05 crc kubenswrapper[4992]: I1211 09:28:05.095392 4992 scope.go:117] 
"RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:28:05 crc kubenswrapper[4992]: E1211 09:28:05.096231 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.060614 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-g47kr_8b4f416d-3812-4dc9-8fa4-5667d5f2339b/kube-rbac-proxy/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.183145 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-g47kr_8b4f416d-3812-4dc9-8fa4-5667d5f2339b/controller/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.474520 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.715782 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.760305 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.764379 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.873157 4992 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:28:08 crc kubenswrapper[4992]: I1211 09:28:08.993424 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.020059 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.036629 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.093901 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.262596 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.283903 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.292411 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.329847 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/controller/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.479373 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/kube-rbac-proxy/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.514596 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/frr-metrics/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.568308 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/kube-rbac-proxy-frr/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.710410 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/reloader/0.log" Dec 11 09:28:09 crc kubenswrapper[4992]: I1211 09:28:09.827076 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-wjv4l_3cb24660-a51b-4701-a0e2-f4edc25d0960/frr-k8s-webhook-server/0.log" Dec 11 09:28:10 crc kubenswrapper[4992]: I1211 09:28:10.020814 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c6f79466f-zkrkf_48361753-e5d3-4311-b9e0-78de22981923/manager/0.log" Dec 11 09:28:10 crc kubenswrapper[4992]: I1211 09:28:10.148434 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7d64577cd-5nznr_19d7371c-f87c-44c9-868e-636a222d606f/webhook-server/0.log" Dec 11 09:28:10 crc kubenswrapper[4992]: I1211 09:28:10.362927 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lfvx8_07a93a81-2773-49bc-a345-528d2d52dbd6/kube-rbac-proxy/0.log" Dec 11 09:28:10 crc kubenswrapper[4992]: I1211 09:28:10.725550 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/frr/0.log" Dec 11 09:28:10 crc kubenswrapper[4992]: I1211 09:28:10.823015 4992 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lfvx8_07a93a81-2773-49bc-a345-528d2d52dbd6/speaker/0.log" Dec 11 09:28:18 crc kubenswrapper[4992]: I1211 09:28:18.095020 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:28:18 crc kubenswrapper[4992]: E1211 09:28:18.096004 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.464260 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/util/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.598433 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/util/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.648894 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/pull/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.680120 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/pull/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.794241 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/pull/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.797421 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/util/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.839601 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/extract/0.log" Dec 11 09:28:23 crc kubenswrapper[4992]: I1211 09:28:23.964558 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/util/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.139544 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/util/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.142726 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/pull/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.153328 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/pull/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.356844 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/util/0.log" Dec 11 
09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.399663 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/pull/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.418530 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/extract/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.584660 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-utilities/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.733472 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-content/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.755913 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-utilities/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.779793 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-content/0.log" Dec 11 09:28:24 crc kubenswrapper[4992]: I1211 09:28:24.963576 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-content/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.037243 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-utilities/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.356114 
4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/registry-server/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.386335 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-utilities/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.552191 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-utilities/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.584032 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-content/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.586958 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-content/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.734272 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-utilities/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.781772 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-content/0.log" Dec 11 09:28:25 crc kubenswrapper[4992]: I1211 09:28:25.958426 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-flz8s_f3229bda-a4f4-42ac-8936-829ab828fce4/marketplace-operator/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.096882 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-utilities/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.273018 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-content/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.284135 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/registry-server/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.323367 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-utilities/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.362431 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-content/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.494730 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-content/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.496505 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-utilities/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.689997 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/registry-server/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.702313 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-utilities/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.860216 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-content/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.876690 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-content/0.log" Dec 11 09:28:26 crc kubenswrapper[4992]: I1211 09:28:26.889394 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-utilities/0.log" Dec 11 09:28:27 crc kubenswrapper[4992]: I1211 09:28:27.042321 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-content/0.log" Dec 11 09:28:27 crc kubenswrapper[4992]: I1211 09:28:27.044837 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-utilities/0.log" Dec 11 09:28:27 crc kubenswrapper[4992]: I1211 09:28:27.631212 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/registry-server/0.log" Dec 11 09:28:32 crc kubenswrapper[4992]: I1211 09:28:32.094471 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:28:32 crc kubenswrapper[4992]: E1211 09:28:32.095154 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:28:43 crc kubenswrapper[4992]: I1211 09:28:43.094858 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:28:43 crc kubenswrapper[4992]: E1211 09:28:43.095607 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:28:57 crc kubenswrapper[4992]: I1211 09:28:57.095240 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:28:57 crc kubenswrapper[4992]: E1211 09:28:57.096113 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:29:08 crc kubenswrapper[4992]: I1211 09:29:08.096186 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:29:08 crc kubenswrapper[4992]: E1211 09:29:08.097059 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:29:19 crc kubenswrapper[4992]: I1211 09:29:19.096044 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:29:19 crc kubenswrapper[4992]: E1211 09:29:19.097255 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:29:32 crc kubenswrapper[4992]: I1211 09:29:32.094941 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:29:32 crc kubenswrapper[4992]: E1211 09:29:32.095897 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:29:47 crc kubenswrapper[4992]: I1211 09:29:47.095118 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:29:47 crc kubenswrapper[4992]: I1211 09:29:47.850888 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" 
event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"70a483776bc5c324d5f53833009f43fbeb28d39591e52e9b29cefc41b9c1b645"} Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.185611 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9"] Dec 11 09:30:00 crc kubenswrapper[4992]: E1211 09:30:00.186868 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="extract-content" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.186886 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="extract-content" Dec 11 09:30:00 crc kubenswrapper[4992]: E1211 09:30:00.186911 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="registry-server" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.186919 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="registry-server" Dec 11 09:30:00 crc kubenswrapper[4992]: E1211 09:30:00.186957 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="extract-utilities" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.186968 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="extract-utilities" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.187200 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08ff940-289f-47d9-ad55-1cce5fddb6a5" containerName="registry-server" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.188051 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.191148 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.191564 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.202230 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9"] Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.307517 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/440f687a-1ef8-43a3-965f-eb72b04ba957-secret-volume\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.308060 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/440f687a-1ef8-43a3-965f-eb72b04ba957-config-volume\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.308256 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcx8k\" (UniqueName: \"kubernetes.io/projected/440f687a-1ef8-43a3-965f-eb72b04ba957-kube-api-access-lcx8k\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.410557 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/440f687a-1ef8-43a3-965f-eb72b04ba957-secret-volume\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.410723 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/440f687a-1ef8-43a3-965f-eb72b04ba957-config-volume\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.410998 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcx8k\" (UniqueName: \"kubernetes.io/projected/440f687a-1ef8-43a3-965f-eb72b04ba957-kube-api-access-lcx8k\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.411801 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/440f687a-1ef8-43a3-965f-eb72b04ba957-config-volume\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.464181 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/440f687a-1ef8-43a3-965f-eb72b04ba957-secret-volume\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.480194 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcx8k\" (UniqueName: \"kubernetes.io/projected/440f687a-1ef8-43a3-965f-eb72b04ba957-kube-api-access-lcx8k\") pod \"collect-profiles-29424090-jhzv9\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.522304 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:00 crc kubenswrapper[4992]: I1211 09:30:00.990452 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9"] Dec 11 09:30:00 crc kubenswrapper[4992]: W1211 09:30:00.995824 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod440f687a_1ef8_43a3_965f_eb72b04ba957.slice/crio-1319263f3feee270f78b3f15c6f5a9a136fc443c962861d44ec88b29f20157ab WatchSource:0}: Error finding container 1319263f3feee270f78b3f15c6f5a9a136fc443c962861d44ec88b29f20157ab: Status 404 returned error can't find the container with id 1319263f3feee270f78b3f15c6f5a9a136fc443c962861d44ec88b29f20157ab Dec 11 09:30:01 crc kubenswrapper[4992]: I1211 09:30:01.969101 4992 generic.go:334] "Generic (PLEG): container finished" podID="440f687a-1ef8-43a3-965f-eb72b04ba957" containerID="6dc299fd0e37094644f7bbe012d7eb3c03e4d19eb46f8ac293e69a71be87dbdc" exitCode=0 Dec 11 09:30:01 crc kubenswrapper[4992]: I1211 09:30:01.969335 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" event={"ID":"440f687a-1ef8-43a3-965f-eb72b04ba957","Type":"ContainerDied","Data":"6dc299fd0e37094644f7bbe012d7eb3c03e4d19eb46f8ac293e69a71be87dbdc"} Dec 11 09:30:01 crc kubenswrapper[4992]: I1211 09:30:01.969419 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" event={"ID":"440f687a-1ef8-43a3-965f-eb72b04ba957","Type":"ContainerStarted","Data":"1319263f3feee270f78b3f15c6f5a9a136fc443c962861d44ec88b29f20157ab"} Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.453698 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.578488 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcx8k\" (UniqueName: \"kubernetes.io/projected/440f687a-1ef8-43a3-965f-eb72b04ba957-kube-api-access-lcx8k\") pod \"440f687a-1ef8-43a3-965f-eb72b04ba957\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.578610 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/440f687a-1ef8-43a3-965f-eb72b04ba957-config-volume\") pod \"440f687a-1ef8-43a3-965f-eb72b04ba957\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.578723 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/440f687a-1ef8-43a3-965f-eb72b04ba957-secret-volume\") pod \"440f687a-1ef8-43a3-965f-eb72b04ba957\" (UID: \"440f687a-1ef8-43a3-965f-eb72b04ba957\") " Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.579761 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/440f687a-1ef8-43a3-965f-eb72b04ba957-config-volume" (OuterVolumeSpecName: "config-volume") pod "440f687a-1ef8-43a3-965f-eb72b04ba957" (UID: "440f687a-1ef8-43a3-965f-eb72b04ba957"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.607041 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440f687a-1ef8-43a3-965f-eb72b04ba957-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "440f687a-1ef8-43a3-965f-eb72b04ba957" (UID: "440f687a-1ef8-43a3-965f-eb72b04ba957"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.623858 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440f687a-1ef8-43a3-965f-eb72b04ba957-kube-api-access-lcx8k" (OuterVolumeSpecName: "kube-api-access-lcx8k") pod "440f687a-1ef8-43a3-965f-eb72b04ba957" (UID: "440f687a-1ef8-43a3-965f-eb72b04ba957"). InnerVolumeSpecName "kube-api-access-lcx8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.680942 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcx8k\" (UniqueName: \"kubernetes.io/projected/440f687a-1ef8-43a3-965f-eb72b04ba957-kube-api-access-lcx8k\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.680980 4992 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/440f687a-1ef8-43a3-965f-eb72b04ba957-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.680989 4992 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/440f687a-1ef8-43a3-965f-eb72b04ba957-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.990430 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" event={"ID":"440f687a-1ef8-43a3-965f-eb72b04ba957","Type":"ContainerDied","Data":"1319263f3feee270f78b3f15c6f5a9a136fc443c962861d44ec88b29f20157ab"} Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.990472 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1319263f3feee270f78b3f15c6f5a9a136fc443c962861d44ec88b29f20157ab" Dec 11 09:30:03 crc kubenswrapper[4992]: I1211 09:30:03.990491 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424090-jhzv9" Dec 11 09:30:04 crc kubenswrapper[4992]: I1211 09:30:04.538141 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd"] Dec 11 09:30:04 crc kubenswrapper[4992]: I1211 09:30:04.552750 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424045-gtqsd"] Dec 11 09:30:06 crc kubenswrapper[4992]: I1211 09:30:06.108279 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a301df2-43ad-4899-8ac7-548594484377" path="/var/lib/kubelet/pods/5a301df2-43ad-4899-8ac7-548594484377/volumes" Dec 11 09:30:15 crc kubenswrapper[4992]: I1211 09:30:15.131681 4992 generic.go:334] "Generic (PLEG): container finished" podID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerID="25da01794a90bd7ed87a8f5cd5e5ad0736516367b36286d63448dc1be2a53ebc" exitCode=0 Dec 11 09:30:15 crc kubenswrapper[4992]: I1211 09:30:15.132557 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-knqll/must-gather-mj6l2" event={"ID":"5d23c00b-c5f8-4eab-9e65-0765282574eb","Type":"ContainerDied","Data":"25da01794a90bd7ed87a8f5cd5e5ad0736516367b36286d63448dc1be2a53ebc"} Dec 11 09:30:15 crc kubenswrapper[4992]: I1211 09:30:15.133647 4992 scope.go:117] "RemoveContainer" containerID="25da01794a90bd7ed87a8f5cd5e5ad0736516367b36286d63448dc1be2a53ebc" Dec 11 09:30:16 crc kubenswrapper[4992]: I1211 09:30:16.085839 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knqll_must-gather-mj6l2_5d23c00b-c5f8-4eab-9e65-0765282574eb/gather/0.log" Dec 11 09:30:18 crc kubenswrapper[4992]: I1211 09:30:18.977862 4992 scope.go:117] "RemoveContainer" containerID="ad474b79d8425729a102bafcb29bc5ad5541b536801b099413d7991c28175d5b" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.493094 4992 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-cnkzq"] Dec 11 09:30:20 crc kubenswrapper[4992]: E1211 09:30:20.493842 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440f687a-1ef8-43a3-965f-eb72b04ba957" containerName="collect-profiles" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.493859 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="440f687a-1ef8-43a3-965f-eb72b04ba957" containerName="collect-profiles" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.494116 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="440f687a-1ef8-43a3-965f-eb72b04ba957" containerName="collect-profiles" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.495477 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.508137 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnkzq"] Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.527977 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-utilities\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.528070 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kxhf\" (UniqueName: \"kubernetes.io/projected/810cda00-4704-4c51-aa12-4450fdd052c1-kube-api-access-7kxhf\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.528174 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-catalog-content\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.629586 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-utilities\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.629706 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kxhf\" (UniqueName: \"kubernetes.io/projected/810cda00-4704-4c51-aa12-4450fdd052c1-kube-api-access-7kxhf\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.629809 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-catalog-content\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.630242 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-utilities\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.630260 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-catalog-content\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.658525 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kxhf\" (UniqueName: \"kubernetes.io/projected/810cda00-4704-4c51-aa12-4450fdd052c1-kube-api-access-7kxhf\") pod \"redhat-operators-cnkzq\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:20 crc kubenswrapper[4992]: I1211 09:30:20.822203 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:21 crc kubenswrapper[4992]: I1211 09:30:21.335573 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnkzq"] Dec 11 09:30:22 crc kubenswrapper[4992]: I1211 09:30:22.203103 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerStarted","Data":"55196b718c409ecf96e1a35e3be834111f4b9002bdceb4ea0e25f4371ca4e1b7"} Dec 11 09:30:22 crc kubenswrapper[4992]: I1211 09:30:22.203584 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerStarted","Data":"6cdb725bfe70027fa75394dec1e8a351b6f0854d9b0c11cabc2da931d73050e9"} Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.214435 4992 generic.go:334] "Generic (PLEG): container finished" podID="810cda00-4704-4c51-aa12-4450fdd052c1" containerID="55196b718c409ecf96e1a35e3be834111f4b9002bdceb4ea0e25f4371ca4e1b7" exitCode=0 Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.214485 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerDied","Data":"55196b718c409ecf96e1a35e3be834111f4b9002bdceb4ea0e25f4371ca4e1b7"} Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.218053 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.471683 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bdwck"] Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.474228 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.490607 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdwck"] Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.589492 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-utilities\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.589566 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8z5\" (UniqueName: \"kubernetes.io/projected/2e074e0d-067b-4a1b-904d-35cbbc936370-kube-api-access-tb8z5\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.589748 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-catalog-content\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.691458 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-catalog-content\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.692119 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-catalog-content\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.692127 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-utilities\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.692279 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8z5\" (UniqueName: \"kubernetes.io/projected/2e074e0d-067b-4a1b-904d-35cbbc936370-kube-api-access-tb8z5\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.692749 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-utilities\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.714547 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8z5\" (UniqueName: \"kubernetes.io/projected/2e074e0d-067b-4a1b-904d-35cbbc936370-kube-api-access-tb8z5\") pod \"certified-operators-bdwck\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:23 crc kubenswrapper[4992]: I1211 09:30:23.795942 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:24 crc kubenswrapper[4992]: I1211 09:30:24.275288 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bdwck"] Dec 11 09:30:24 crc kubenswrapper[4992]: W1211 09:30:24.278643 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e074e0d_067b_4a1b_904d_35cbbc936370.slice/crio-cc52ec67e7e603a9b3e92fc269a277788e3839654b488e3979af291d30ce15bb WatchSource:0}: Error finding container cc52ec67e7e603a9b3e92fc269a277788e3839654b488e3979af291d30ce15bb: Status 404 returned error can't find the container with id cc52ec67e7e603a9b3e92fc269a277788e3839654b488e3979af291d30ce15bb Dec 11 09:30:24 crc kubenswrapper[4992]: I1211 09:30:24.700979 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-knqll/must-gather-mj6l2"] Dec 11 09:30:24 crc kubenswrapper[4992]: I1211 09:30:24.701603 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-knqll/must-gather-mj6l2" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="copy" 
containerID="cri-o://4d8a24f2135f9281aa8c82ae0f5e3253a59e5ed5ea7d76cfb297a40cca535137" gracePeriod=2 Dec 11 09:30:24 crc kubenswrapper[4992]: I1211 09:30:24.710258 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-knqll/must-gather-mj6l2"] Dec 11 09:30:25 crc kubenswrapper[4992]: I1211 09:30:25.236105 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knqll_must-gather-mj6l2_5d23c00b-c5f8-4eab-9e65-0765282574eb/copy/0.log" Dec 11 09:30:25 crc kubenswrapper[4992]: I1211 09:30:25.236991 4992 generic.go:334] "Generic (PLEG): container finished" podID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerID="4d8a24f2135f9281aa8c82ae0f5e3253a59e5ed5ea7d76cfb297a40cca535137" exitCode=143 Dec 11 09:30:25 crc kubenswrapper[4992]: I1211 09:30:25.238827 4992 generic.go:334] "Generic (PLEG): container finished" podID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerID="f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66" exitCode=0 Dec 11 09:30:25 crc kubenswrapper[4992]: I1211 09:30:25.238865 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdwck" event={"ID":"2e074e0d-067b-4a1b-904d-35cbbc936370","Type":"ContainerDied","Data":"f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66"} Dec 11 09:30:25 crc kubenswrapper[4992]: I1211 09:30:25.238888 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdwck" event={"ID":"2e074e0d-067b-4a1b-904d-35cbbc936370","Type":"ContainerStarted","Data":"cc52ec67e7e603a9b3e92fc269a277788e3839654b488e3979af291d30ce15bb"} Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.604342 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knqll_must-gather-mj6l2_5d23c00b-c5f8-4eab-9e65-0765282574eb/copy/0.log" Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.605450 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.652668 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d23c00b-c5f8-4eab-9e65-0765282574eb-must-gather-output\") pod \"5d23c00b-c5f8-4eab-9e65-0765282574eb\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.652933 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp5gr\" (UniqueName: \"kubernetes.io/projected/5d23c00b-c5f8-4eab-9e65-0765282574eb-kube-api-access-bp5gr\") pod \"5d23c00b-c5f8-4eab-9e65-0765282574eb\" (UID: \"5d23c00b-c5f8-4eab-9e65-0765282574eb\") " Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.658865 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d23c00b-c5f8-4eab-9e65-0765282574eb-kube-api-access-bp5gr" (OuterVolumeSpecName: "kube-api-access-bp5gr") pod "5d23c00b-c5f8-4eab-9e65-0765282574eb" (UID: "5d23c00b-c5f8-4eab-9e65-0765282574eb"). InnerVolumeSpecName "kube-api-access-bp5gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.755161 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp5gr\" (UniqueName: \"kubernetes.io/projected/5d23c00b-c5f8-4eab-9e65-0765282574eb-kube-api-access-bp5gr\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.804363 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d23c00b-c5f8-4eab-9e65-0765282574eb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "5d23c00b-c5f8-4eab-9e65-0765282574eb" (UID: "5d23c00b-c5f8-4eab-9e65-0765282574eb"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:30:26 crc kubenswrapper[4992]: I1211 09:30:26.856588 4992 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/5d23c00b-c5f8-4eab-9e65-0765282574eb-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:27 crc kubenswrapper[4992]: I1211 09:30:27.256571 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-knqll_must-gather-mj6l2_5d23c00b-c5f8-4eab-9e65-0765282574eb/copy/0.log" Dec 11 09:30:27 crc kubenswrapper[4992]: I1211 09:30:27.256946 4992 scope.go:117] "RemoveContainer" containerID="4d8a24f2135f9281aa8c82ae0f5e3253a59e5ed5ea7d76cfb297a40cca535137" Dec 11 09:30:27 crc kubenswrapper[4992]: I1211 09:30:27.257115 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-knqll/must-gather-mj6l2" Dec 11 09:30:28 crc kubenswrapper[4992]: I1211 09:30:28.113284 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" path="/var/lib/kubelet/pods/5d23c00b-c5f8-4eab-9e65-0765282574eb/volumes" Dec 11 09:30:35 crc kubenswrapper[4992]: I1211 09:30:35.446479 4992 scope.go:117] "RemoveContainer" containerID="25da01794a90bd7ed87a8f5cd5e5ad0736516367b36286d63448dc1be2a53ebc" Dec 11 09:30:37 crc kubenswrapper[4992]: I1211 09:30:37.352909 4992 generic.go:334] "Generic (PLEG): container finished" podID="810cda00-4704-4c51-aa12-4450fdd052c1" containerID="45c5d3d4f0f799b3f954e97707e9c343ae5a1c474398f8d2fa6ab709d8942a55" exitCode=0 Dec 11 09:30:37 crc kubenswrapper[4992]: I1211 09:30:37.353003 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerDied","Data":"45c5d3d4f0f799b3f954e97707e9c343ae5a1c474398f8d2fa6ab709d8942a55"} Dec 11 09:30:37 crc kubenswrapper[4992]: I1211 09:30:37.356584 
4992 generic.go:334] "Generic (PLEG): container finished" podID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerID="546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806" exitCode=0 Dec 11 09:30:37 crc kubenswrapper[4992]: I1211 09:30:37.356620 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdwck" event={"ID":"2e074e0d-067b-4a1b-904d-35cbbc936370","Type":"ContainerDied","Data":"546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806"} Dec 11 09:30:38 crc kubenswrapper[4992]: I1211 09:30:38.369038 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerStarted","Data":"e3f9a377585ab3260500d35b1392ebcc10e20dab87381fd96adc0971819ecf6e"} Dec 11 09:30:39 crc kubenswrapper[4992]: I1211 09:30:39.382375 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdwck" event={"ID":"2e074e0d-067b-4a1b-904d-35cbbc936370","Type":"ContainerStarted","Data":"406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5"} Dec 11 09:30:39 crc kubenswrapper[4992]: I1211 09:30:39.401712 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cnkzq" podStartSLOduration=4.810423934 podStartE2EDuration="19.401691484s" podCreationTimestamp="2025-12-11 09:30:20 +0000 UTC" firstStartedPulling="2025-12-11 09:30:23.217722517 +0000 UTC m=+4047.477196453" lastFinishedPulling="2025-12-11 09:30:37.808990087 +0000 UTC m=+4062.068464003" observedRunningTime="2025-12-11 09:30:38.396990534 +0000 UTC m=+4062.656464460" watchObservedRunningTime="2025-12-11 09:30:39.401691484 +0000 UTC m=+4063.661165410" Dec 11 09:30:39 crc kubenswrapper[4992]: I1211 09:30:39.411992 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bdwck" 
podStartSLOduration=2.810218637 podStartE2EDuration="16.411969065s" podCreationTimestamp="2025-12-11 09:30:23 +0000 UTC" firstStartedPulling="2025-12-11 09:30:25.241338075 +0000 UTC m=+4049.500812001" lastFinishedPulling="2025-12-11 09:30:38.843088503 +0000 UTC m=+4063.102562429" observedRunningTime="2025-12-11 09:30:39.410653263 +0000 UTC m=+4063.670127209" watchObservedRunningTime="2025-12-11 09:30:39.411969065 +0000 UTC m=+4063.671443001" Dec 11 09:30:40 crc kubenswrapper[4992]: I1211 09:30:40.822870 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:40 crc kubenswrapper[4992]: I1211 09:30:40.823536 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:41 crc kubenswrapper[4992]: I1211 09:30:41.869422 4992 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnkzq" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="registry-server" probeResult="failure" output=< Dec 11 09:30:41 crc kubenswrapper[4992]: timeout: failed to connect service ":50051" within 1s Dec 11 09:30:41 crc kubenswrapper[4992]: > Dec 11 09:30:43 crc kubenswrapper[4992]: I1211 09:30:43.796292 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:43 crc kubenswrapper[4992]: I1211 09:30:43.796785 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:43 crc kubenswrapper[4992]: I1211 09:30:43.845676 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:44 crc kubenswrapper[4992]: I1211 09:30:44.482570 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bdwck" Dec 
11 09:30:44 crc kubenswrapper[4992]: I1211 09:30:44.544401 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdwck"] Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.448395 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bdwck" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="registry-server" containerID="cri-o://406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5" gracePeriod=2 Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.906546 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.945193 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-catalog-content\") pod \"2e074e0d-067b-4a1b-904d-35cbbc936370\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.945405 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-utilities\") pod \"2e074e0d-067b-4a1b-904d-35cbbc936370\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.945498 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8z5\" (UniqueName: \"kubernetes.io/projected/2e074e0d-067b-4a1b-904d-35cbbc936370-kube-api-access-tb8z5\") pod \"2e074e0d-067b-4a1b-904d-35cbbc936370\" (UID: \"2e074e0d-067b-4a1b-904d-35cbbc936370\") " Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.946191 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-utilities" (OuterVolumeSpecName: "utilities") pod "2e074e0d-067b-4a1b-904d-35cbbc936370" (UID: "2e074e0d-067b-4a1b-904d-35cbbc936370"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:30:46 crc kubenswrapper[4992]: I1211 09:30:46.953116 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e074e0d-067b-4a1b-904d-35cbbc936370-kube-api-access-tb8z5" (OuterVolumeSpecName: "kube-api-access-tb8z5") pod "2e074e0d-067b-4a1b-904d-35cbbc936370" (UID: "2e074e0d-067b-4a1b-904d-35cbbc936370"). InnerVolumeSpecName "kube-api-access-tb8z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.020467 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e074e0d-067b-4a1b-904d-35cbbc936370" (UID: "2e074e0d-067b-4a1b-904d-35cbbc936370"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.047763 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.047796 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e074e0d-067b-4a1b-904d-35cbbc936370-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.047816 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8z5\" (UniqueName: \"kubernetes.io/projected/2e074e0d-067b-4a1b-904d-35cbbc936370-kube-api-access-tb8z5\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.460407 4992 generic.go:334] "Generic (PLEG): container finished" podID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerID="406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5" exitCode=0 Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.460470 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdwck" event={"ID":"2e074e0d-067b-4a1b-904d-35cbbc936370","Type":"ContainerDied","Data":"406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5"} Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.460802 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bdwck" event={"ID":"2e074e0d-067b-4a1b-904d-35cbbc936370","Type":"ContainerDied","Data":"cc52ec67e7e603a9b3e92fc269a277788e3839654b488e3979af291d30ce15bb"} Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.460496 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bdwck" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.460829 4992 scope.go:117] "RemoveContainer" containerID="406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.480093 4992 scope.go:117] "RemoveContainer" containerID="546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.499024 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bdwck"] Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.509210 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bdwck"] Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.511406 4992 scope.go:117] "RemoveContainer" containerID="f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.555216 4992 scope.go:117] "RemoveContainer" containerID="406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5" Dec 11 09:30:47 crc kubenswrapper[4992]: E1211 09:30:47.555782 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5\": container with ID starting with 406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5 not found: ID does not exist" containerID="406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.555819 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5"} err="failed to get container status \"406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5\": rpc error: code = NotFound desc = could not find 
container \"406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5\": container with ID starting with 406f71db855f63b6fdb88c335b87e5c77b33ca02ba1121c82d4b94b0013767c5 not found: ID does not exist" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.555852 4992 scope.go:117] "RemoveContainer" containerID="546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806" Dec 11 09:30:47 crc kubenswrapper[4992]: E1211 09:30:47.556372 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806\": container with ID starting with 546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806 not found: ID does not exist" containerID="546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.556421 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806"} err="failed to get container status \"546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806\": rpc error: code = NotFound desc = could not find container \"546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806\": container with ID starting with 546b7ffacf7fb3805e16a6c572c9ebb79dd1aa8c28c951385968a93a1ba12806 not found: ID does not exist" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.556460 4992 scope.go:117] "RemoveContainer" containerID="f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66" Dec 11 09:30:47 crc kubenswrapper[4992]: E1211 09:30:47.556863 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66\": container with ID starting with f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66 not found: ID does 
not exist" containerID="f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66" Dec 11 09:30:47 crc kubenswrapper[4992]: I1211 09:30:47.556909 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66"} err="failed to get container status \"f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66\": rpc error: code = NotFound desc = could not find container \"f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66\": container with ID starting with f642bb83b342a1a2b719d60ebfd423acaa7e3e5f74423eab5d1472f14bdc6c66 not found: ID does not exist" Dec 11 09:30:48 crc kubenswrapper[4992]: I1211 09:30:48.105588 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" path="/var/lib/kubelet/pods/2e074e0d-067b-4a1b-904d-35cbbc936370/volumes" Dec 11 09:30:50 crc kubenswrapper[4992]: I1211 09:30:50.869474 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:50 crc kubenswrapper[4992]: I1211 09:30:50.911302 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:51 crc kubenswrapper[4992]: I1211 09:30:51.685181 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnkzq"] Dec 11 09:30:52 crc kubenswrapper[4992]: I1211 09:30:52.505725 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cnkzq" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="registry-server" containerID="cri-o://e3f9a377585ab3260500d35b1392ebcc10e20dab87381fd96adc0971819ecf6e" gracePeriod=2 Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.533964 4992 generic.go:334] "Generic (PLEG): container finished" 
podID="810cda00-4704-4c51-aa12-4450fdd052c1" containerID="e3f9a377585ab3260500d35b1392ebcc10e20dab87381fd96adc0971819ecf6e" exitCode=0 Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.534449 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerDied","Data":"e3f9a377585ab3260500d35b1392ebcc10e20dab87381fd96adc0971819ecf6e"} Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.751746 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.822728 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-utilities\") pod \"810cda00-4704-4c51-aa12-4450fdd052c1\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.822957 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kxhf\" (UniqueName: \"kubernetes.io/projected/810cda00-4704-4c51-aa12-4450fdd052c1-kube-api-access-7kxhf\") pod \"810cda00-4704-4c51-aa12-4450fdd052c1\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.823257 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-catalog-content\") pod \"810cda00-4704-4c51-aa12-4450fdd052c1\" (UID: \"810cda00-4704-4c51-aa12-4450fdd052c1\") " Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.823951 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-utilities" (OuterVolumeSpecName: "utilities") pod 
"810cda00-4704-4c51-aa12-4450fdd052c1" (UID: "810cda00-4704-4c51-aa12-4450fdd052c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.832736 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810cda00-4704-4c51-aa12-4450fdd052c1-kube-api-access-7kxhf" (OuterVolumeSpecName: "kube-api-access-7kxhf") pod "810cda00-4704-4c51-aa12-4450fdd052c1" (UID: "810cda00-4704-4c51-aa12-4450fdd052c1"). InnerVolumeSpecName "kube-api-access-7kxhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.926074 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.926108 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kxhf\" (UniqueName: \"kubernetes.io/projected/810cda00-4704-4c51-aa12-4450fdd052c1-kube-api-access-7kxhf\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:53 crc kubenswrapper[4992]: I1211 09:30:53.938619 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "810cda00-4704-4c51-aa12-4450fdd052c1" (UID: "810cda00-4704-4c51-aa12-4450fdd052c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.027888 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/810cda00-4704-4c51-aa12-4450fdd052c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.567609 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnkzq" event={"ID":"810cda00-4704-4c51-aa12-4450fdd052c1","Type":"ContainerDied","Data":"6cdb725bfe70027fa75394dec1e8a351b6f0854d9b0c11cabc2da931d73050e9"} Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.567694 4992 scope.go:117] "RemoveContainer" containerID="e3f9a377585ab3260500d35b1392ebcc10e20dab87381fd96adc0971819ecf6e" Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.567696 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cnkzq" Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.592541 4992 scope.go:117] "RemoveContainer" containerID="45c5d3d4f0f799b3f954e97707e9c343ae5a1c474398f8d2fa6ab709d8942a55" Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.597919 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnkzq"] Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.605703 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cnkzq"] Dec 11 09:30:54 crc kubenswrapper[4992]: I1211 09:30:54.619092 4992 scope.go:117] "RemoveContainer" containerID="55196b718c409ecf96e1a35e3be834111f4b9002bdceb4ea0e25f4371ca4e1b7" Dec 11 09:30:56 crc kubenswrapper[4992]: I1211 09:30:56.119539 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" path="/var/lib/kubelet/pods/810cda00-4704-4c51-aa12-4450fdd052c1/volumes" Dec 11 09:31:19 crc 
kubenswrapper[4992]: I1211 09:31:19.064843 4992 scope.go:117] "RemoveContainer" containerID="5b91f2721143fa74eaa226e1c6e6966fa7c559e97d75fb957ad8bf271ce3dae5" Dec 11 09:32:05 crc kubenswrapper[4992]: I1211 09:32:05.378912 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:32:05 crc kubenswrapper[4992]: I1211 09:32:05.379525 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:32:19 crc kubenswrapper[4992]: I1211 09:32:19.162747 4992 scope.go:117] "RemoveContainer" containerID="f28fce16447f949b5ec52b814c1c9f1210fac5cec05d46b879f33b246f9fc4f1" Dec 11 09:32:35 crc kubenswrapper[4992]: I1211 09:32:35.378810 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:32:35 crc kubenswrapper[4992]: I1211 09:32:35.379367 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.378569 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.379344 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.379390 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.380256 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"70a483776bc5c324d5f53833009f43fbeb28d39591e52e9b29cefc41b9c1b645"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.380306 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://70a483776bc5c324d5f53833009f43fbeb28d39591e52e9b29cefc41b9c1b645" gracePeriod=600 Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.952657 4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="70a483776bc5c324d5f53833009f43fbeb28d39591e52e9b29cefc41b9c1b645" exitCode=0 Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.952738 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"70a483776bc5c324d5f53833009f43fbeb28d39591e52e9b29cefc41b9c1b645"} Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.952974 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43"} Dec 11 09:33:05 crc kubenswrapper[4992]: I1211 09:33:05.953005 4992 scope.go:117] "RemoveContainer" containerID="0f6a189b00ab1c3c56f71713867195e57486d7a5972465f4352c9e8739824691" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.128882 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rcjl7/must-gather-bd29j"] Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129798 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="extract-utilities" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129810 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="extract-utilities" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129824 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="extract-content" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129829 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="extract-content" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129843 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="extract-content" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129849 4992 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="extract-content" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129867 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="registry-server" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129872 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="registry-server" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129889 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="gather" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129895 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="gather" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129909 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="registry-server" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129915 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="registry-server" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129928 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="extract-utilities" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129934 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="extract-utilities" Dec 11 09:33:26 crc kubenswrapper[4992]: E1211 09:33:26.129944 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="copy" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.129950 4992 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="copy" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.130114 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e074e0d-067b-4a1b-904d-35cbbc936370" containerName="registry-server" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.130130 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="810cda00-4704-4c51-aa12-4450fdd052c1" containerName="registry-server" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.130147 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="copy" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.130162 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d23c00b-c5f8-4eab-9e65-0765282574eb" containerName="gather" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.131096 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.138115 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rcjl7"/"openshift-service-ca.crt" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.138338 4992 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rcjl7"/"kube-root-ca.crt" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.181375 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rcjl7/must-gather-bd29j"] Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.239072 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01174cb-d167-4994-a981-e03257bf5af8-must-gather-output\") pod \"must-gather-bd29j\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " 
pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.239273 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8s6t\" (UniqueName: \"kubernetes.io/projected/f01174cb-d167-4994-a981-e03257bf5af8-kube-api-access-j8s6t\") pod \"must-gather-bd29j\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.340883 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8s6t\" (UniqueName: \"kubernetes.io/projected/f01174cb-d167-4994-a981-e03257bf5af8-kube-api-access-j8s6t\") pod \"must-gather-bd29j\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.340995 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01174cb-d167-4994-a981-e03257bf5af8-must-gather-output\") pod \"must-gather-bd29j\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.341531 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01174cb-d167-4994-a981-e03257bf5af8-must-gather-output\") pod \"must-gather-bd29j\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.374802 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8s6t\" (UniqueName: \"kubernetes.io/projected/f01174cb-d167-4994-a981-e03257bf5af8-kube-api-access-j8s6t\") pod \"must-gather-bd29j\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " 
pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.469248 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:33:26 crc kubenswrapper[4992]: I1211 09:33:26.943606 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rcjl7/must-gather-bd29j"] Dec 11 09:33:27 crc kubenswrapper[4992]: I1211 09:33:27.217124 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/must-gather-bd29j" event={"ID":"f01174cb-d167-4994-a981-e03257bf5af8","Type":"ContainerStarted","Data":"07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b"} Dec 11 09:33:27 crc kubenswrapper[4992]: I1211 09:33:27.217176 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/must-gather-bd29j" event={"ID":"f01174cb-d167-4994-a981-e03257bf5af8","Type":"ContainerStarted","Data":"1f2ce843f82061c434e376331c58d493bec00ccb7f982168b8f67bba053ec923"} Dec 11 09:33:28 crc kubenswrapper[4992]: I1211 09:33:28.226531 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/must-gather-bd29j" event={"ID":"f01174cb-d167-4994-a981-e03257bf5af8","Type":"ContainerStarted","Data":"675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da"} Dec 11 09:33:28 crc kubenswrapper[4992]: I1211 09:33:28.252486 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rcjl7/must-gather-bd29j" podStartSLOduration=2.252468163 podStartE2EDuration="2.252468163s" podCreationTimestamp="2025-12-11 09:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:33:28.242550561 +0000 UTC m=+4232.502024497" watchObservedRunningTime="2025-12-11 09:33:28.252468163 +0000 UTC m=+4232.511942089" Dec 11 09:33:30 crc kubenswrapper[4992]: 
E1211 09:33:30.272021 4992 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.105:33506->38.129.56.105:35079: read tcp 38.129.56.105:33506->38.129.56.105:35079: read: connection reset by peer Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.711846 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-hm8lk"] Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.713296 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.715793 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rcjl7"/"default-dockercfg-bzscf" Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.870948 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvthj\" (UniqueName: \"kubernetes.io/projected/34455e11-752b-40ad-a6f0-a2b19de7b50e-kube-api-access-cvthj\") pod \"crc-debug-hm8lk\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.871014 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34455e11-752b-40ad-a6f0-a2b19de7b50e-host\") pod \"crc-debug-hm8lk\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.972947 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvthj\" (UniqueName: \"kubernetes.io/projected/34455e11-752b-40ad-a6f0-a2b19de7b50e-kube-api-access-cvthj\") pod \"crc-debug-hm8lk\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:30 crc kubenswrapper[4992]: 
I1211 09:33:30.973006 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34455e11-752b-40ad-a6f0-a2b19de7b50e-host\") pod \"crc-debug-hm8lk\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.973185 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34455e11-752b-40ad-a6f0-a2b19de7b50e-host\") pod \"crc-debug-hm8lk\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:30 crc kubenswrapper[4992]: I1211 09:33:30.992799 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvthj\" (UniqueName: \"kubernetes.io/projected/34455e11-752b-40ad-a6f0-a2b19de7b50e-kube-api-access-cvthj\") pod \"crc-debug-hm8lk\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:31 crc kubenswrapper[4992]: I1211 09:33:31.035519 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:33:31 crc kubenswrapper[4992]: W1211 09:33:31.068274 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34455e11_752b_40ad_a6f0_a2b19de7b50e.slice/crio-922a50cb36ea85497a2d7a5b19948be6fa8187e7e56f7e166269b7ba2f23cd85 WatchSource:0}: Error finding container 922a50cb36ea85497a2d7a5b19948be6fa8187e7e56f7e166269b7ba2f23cd85: Status 404 returned error can't find the container with id 922a50cb36ea85497a2d7a5b19948be6fa8187e7e56f7e166269b7ba2f23cd85 Dec 11 09:33:31 crc kubenswrapper[4992]: I1211 09:33:31.259716 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" event={"ID":"34455e11-752b-40ad-a6f0-a2b19de7b50e","Type":"ContainerStarted","Data":"922a50cb36ea85497a2d7a5b19948be6fa8187e7e56f7e166269b7ba2f23cd85"} Dec 11 09:33:32 crc kubenswrapper[4992]: I1211 09:33:32.277954 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" event={"ID":"34455e11-752b-40ad-a6f0-a2b19de7b50e","Type":"ContainerStarted","Data":"36b679bb20493d184e0ff3774956f3d6e191923c9939b6525855975f6616a501"} Dec 11 09:33:32 crc kubenswrapper[4992]: I1211 09:33:32.295579 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" podStartSLOduration=2.295558046 podStartE2EDuration="2.295558046s" podCreationTimestamp="2025-12-11 09:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:33:32.29203238 +0000 UTC m=+4236.551506306" watchObservedRunningTime="2025-12-11 09:33:32.295558046 +0000 UTC m=+4236.555031972" Dec 11 09:34:07 crc kubenswrapper[4992]: I1211 09:34:07.605965 4992 generic.go:334] "Generic (PLEG): container finished" podID="34455e11-752b-40ad-a6f0-a2b19de7b50e" 
containerID="36b679bb20493d184e0ff3774956f3d6e191923c9939b6525855975f6616a501" exitCode=0 Dec 11 09:34:07 crc kubenswrapper[4992]: I1211 09:34:07.606054 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" event={"ID":"34455e11-752b-40ad-a6f0-a2b19de7b50e","Type":"ContainerDied","Data":"36b679bb20493d184e0ff3774956f3d6e191923c9939b6525855975f6616a501"} Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.743221 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.783558 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-hm8lk"] Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.795775 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-hm8lk"] Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.846165 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvthj\" (UniqueName: \"kubernetes.io/projected/34455e11-752b-40ad-a6f0-a2b19de7b50e-kube-api-access-cvthj\") pod \"34455e11-752b-40ad-a6f0-a2b19de7b50e\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.846281 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34455e11-752b-40ad-a6f0-a2b19de7b50e-host\") pod \"34455e11-752b-40ad-a6f0-a2b19de7b50e\" (UID: \"34455e11-752b-40ad-a6f0-a2b19de7b50e\") " Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.846911 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34455e11-752b-40ad-a6f0-a2b19de7b50e-host" (OuterVolumeSpecName: "host") pod "34455e11-752b-40ad-a6f0-a2b19de7b50e" (UID: "34455e11-752b-40ad-a6f0-a2b19de7b50e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.861889 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34455e11-752b-40ad-a6f0-a2b19de7b50e-kube-api-access-cvthj" (OuterVolumeSpecName: "kube-api-access-cvthj") pod "34455e11-752b-40ad-a6f0-a2b19de7b50e" (UID: "34455e11-752b-40ad-a6f0-a2b19de7b50e"). InnerVolumeSpecName "kube-api-access-cvthj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.949186 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvthj\" (UniqueName: \"kubernetes.io/projected/34455e11-752b-40ad-a6f0-a2b19de7b50e-kube-api-access-cvthj\") on node \"crc\" DevicePath \"\"" Dec 11 09:34:08 crc kubenswrapper[4992]: I1211 09:34:08.949228 4992 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/34455e11-752b-40ad-a6f0-a2b19de7b50e-host\") on node \"crc\" DevicePath \"\"" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.626995 4992 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="922a50cb36ea85497a2d7a5b19948be6fa8187e7e56f7e166269b7ba2f23cd85" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.627057 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-hm8lk" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.937400 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-sdvtx"] Dec 11 09:34:09 crc kubenswrapper[4992]: E1211 09:34:09.937893 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34455e11-752b-40ad-a6f0-a2b19de7b50e" containerName="container-00" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.937908 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="34455e11-752b-40ad-a6f0-a2b19de7b50e" containerName="container-00" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.938091 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="34455e11-752b-40ad-a6f0-a2b19de7b50e" containerName="container-00" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.938704 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:09 crc kubenswrapper[4992]: I1211 09:34:09.940269 4992 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rcjl7"/"default-dockercfg-bzscf" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.070139 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdr98\" (UniqueName: \"kubernetes.io/projected/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-kube-api-access-rdr98\") pod \"crc-debug-sdvtx\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.070350 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-host\") pod \"crc-debug-sdvtx\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " 
pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.129540 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34455e11-752b-40ad-a6f0-a2b19de7b50e" path="/var/lib/kubelet/pods/34455e11-752b-40ad-a6f0-a2b19de7b50e/volumes" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.172202 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdr98\" (UniqueName: \"kubernetes.io/projected/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-kube-api-access-rdr98\") pod \"crc-debug-sdvtx\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.172648 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-host\") pod \"crc-debug-sdvtx\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.172869 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-host\") pod \"crc-debug-sdvtx\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.196966 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdr98\" (UniqueName: \"kubernetes.io/projected/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-kube-api-access-rdr98\") pod \"crc-debug-sdvtx\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.260073 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.637426 4992 generic.go:334] "Generic (PLEG): container finished" podID="db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" containerID="1d01639c5dce39307928663494af800cf55109d2d431a87a740f2313272a8e3d" exitCode=0 Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.637828 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" event={"ID":"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b","Type":"ContainerDied","Data":"1d01639c5dce39307928663494af800cf55109d2d431a87a740f2313272a8e3d"} Dec 11 09:34:10 crc kubenswrapper[4992]: I1211 09:34:10.637862 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" event={"ID":"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b","Type":"ContainerStarted","Data":"d0d1a508648b66cdff9a288a15b515521c824440ce47ce3e7f89d29c18e3f8fa"} Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.125315 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-sdvtx"] Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.133876 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-sdvtx"] Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.751440 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.905382 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdr98\" (UniqueName: \"kubernetes.io/projected/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-kube-api-access-rdr98\") pod \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.905516 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-host\") pod \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\" (UID: \"db2eba0a-a2cc-4ec8-910c-579a6ef40f5b\") " Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.905867 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-host" (OuterVolumeSpecName: "host") pod "db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" (UID: "db2eba0a-a2cc-4ec8-910c-579a6ef40f5b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.906011 4992 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-host\") on node \"crc\" DevicePath \"\"" Dec 11 09:34:11 crc kubenswrapper[4992]: I1211 09:34:11.910887 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-kube-api-access-rdr98" (OuterVolumeSpecName: "kube-api-access-rdr98") pod "db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" (UID: "db2eba0a-a2cc-4ec8-910c-579a6ef40f5b"). InnerVolumeSpecName "kube-api-access-rdr98". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.007472 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdr98\" (UniqueName: \"kubernetes.io/projected/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b-kube-api-access-rdr98\") on node \"crc\" DevicePath \"\"" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.113148 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" path="/var/lib/kubelet/pods/db2eba0a-a2cc-4ec8-910c-579a6ef40f5b/volumes" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.298224 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-85t55"] Dec 11 09:34:12 crc kubenswrapper[4992]: E1211 09:34:12.298827 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" containerName="container-00" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.298845 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" containerName="container-00" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.299136 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2eba0a-a2cc-4ec8-910c-579a6ef40f5b" containerName="container-00" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.299956 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.414807 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/398c83c7-84b4-4daf-9a47-42cb99823860-host\") pod \"crc-debug-85t55\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.414877 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phd5t\" (UniqueName: \"kubernetes.io/projected/398c83c7-84b4-4daf-9a47-42cb99823860-kube-api-access-phd5t\") pod \"crc-debug-85t55\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.516090 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/398c83c7-84b4-4daf-9a47-42cb99823860-host\") pod \"crc-debug-85t55\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.516175 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phd5t\" (UniqueName: \"kubernetes.io/projected/398c83c7-84b4-4daf-9a47-42cb99823860-kube-api-access-phd5t\") pod \"crc-debug-85t55\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.516489 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/398c83c7-84b4-4daf-9a47-42cb99823860-host\") pod \"crc-debug-85t55\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc 
kubenswrapper[4992]: I1211 09:34:12.536021 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phd5t\" (UniqueName: \"kubernetes.io/projected/398c83c7-84b4-4daf-9a47-42cb99823860-kube-api-access-phd5t\") pod \"crc-debug-85t55\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.618279 4992 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:12 crc kubenswrapper[4992]: W1211 09:34:12.657082 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398c83c7_84b4_4daf_9a47_42cb99823860.slice/crio-5e295a00c3fbe2043225a7d7322de22fd10ead75833e7a6028522bc7d4c98979 WatchSource:0}: Error finding container 5e295a00c3fbe2043225a7d7322de22fd10ead75833e7a6028522bc7d4c98979: Status 404 returned error can't find the container with id 5e295a00c3fbe2043225a7d7322de22fd10ead75833e7a6028522bc7d4c98979 Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.659449 4992 scope.go:117] "RemoveContainer" containerID="1d01639c5dce39307928663494af800cf55109d2d431a87a740f2313272a8e3d" Dec 11 09:34:12 crc kubenswrapper[4992]: I1211 09:34:12.659474 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-sdvtx" Dec 11 09:34:13 crc kubenswrapper[4992]: I1211 09:34:13.685041 4992 generic.go:334] "Generic (PLEG): container finished" podID="398c83c7-84b4-4daf-9a47-42cb99823860" containerID="b42288fc4e3eafc74d42e27cfc19db947070889daa7878dc62c803e959bb9d6d" exitCode=0 Dec 11 09:34:13 crc kubenswrapper[4992]: I1211 09:34:13.685124 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-85t55" event={"ID":"398c83c7-84b4-4daf-9a47-42cb99823860","Type":"ContainerDied","Data":"b42288fc4e3eafc74d42e27cfc19db947070889daa7878dc62c803e959bb9d6d"} Dec 11 09:34:13 crc kubenswrapper[4992]: I1211 09:34:13.685372 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/crc-debug-85t55" event={"ID":"398c83c7-84b4-4daf-9a47-42cb99823860","Type":"ContainerStarted","Data":"5e295a00c3fbe2043225a7d7322de22fd10ead75833e7a6028522bc7d4c98979"} Dec 11 09:34:13 crc kubenswrapper[4992]: I1211 09:34:13.726653 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-85t55"] Dec 11 09:34:13 crc kubenswrapper[4992]: I1211 09:34:13.737733 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rcjl7/crc-debug-85t55"] Dec 11 09:34:14 crc kubenswrapper[4992]: I1211 09:34:14.811422 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:14 crc kubenswrapper[4992]: I1211 09:34:14.959666 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/398c83c7-84b4-4daf-9a47-42cb99823860-host\") pod \"398c83c7-84b4-4daf-9a47-42cb99823860\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " Dec 11 09:34:14 crc kubenswrapper[4992]: I1211 09:34:14.959731 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phd5t\" (UniqueName: \"kubernetes.io/projected/398c83c7-84b4-4daf-9a47-42cb99823860-kube-api-access-phd5t\") pod \"398c83c7-84b4-4daf-9a47-42cb99823860\" (UID: \"398c83c7-84b4-4daf-9a47-42cb99823860\") " Dec 11 09:34:14 crc kubenswrapper[4992]: I1211 09:34:14.959836 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/398c83c7-84b4-4daf-9a47-42cb99823860-host" (OuterVolumeSpecName: "host") pod "398c83c7-84b4-4daf-9a47-42cb99823860" (UID: "398c83c7-84b4-4daf-9a47-42cb99823860"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:34:14 crc kubenswrapper[4992]: I1211 09:34:14.960295 4992 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/398c83c7-84b4-4daf-9a47-42cb99823860-host\") on node \"crc\" DevicePath \"\"" Dec 11 09:34:14 crc kubenswrapper[4992]: I1211 09:34:14.967864 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398c83c7-84b4-4daf-9a47-42cb99823860-kube-api-access-phd5t" (OuterVolumeSpecName: "kube-api-access-phd5t") pod "398c83c7-84b4-4daf-9a47-42cb99823860" (UID: "398c83c7-84b4-4daf-9a47-42cb99823860"). InnerVolumeSpecName "kube-api-access-phd5t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:34:15 crc kubenswrapper[4992]: I1211 09:34:15.062228 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phd5t\" (UniqueName: \"kubernetes.io/projected/398c83c7-84b4-4daf-9a47-42cb99823860-kube-api-access-phd5t\") on node \"crc\" DevicePath \"\"" Dec 11 09:34:15 crc kubenswrapper[4992]: I1211 09:34:15.704363 4992 scope.go:117] "RemoveContainer" containerID="b42288fc4e3eafc74d42e27cfc19db947070889daa7878dc62c803e959bb9d6d" Dec 11 09:34:15 crc kubenswrapper[4992]: I1211 09:34:15.704392 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/crc-debug-85t55" Dec 11 09:34:16 crc kubenswrapper[4992]: I1211 09:34:16.106699 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398c83c7-84b4-4daf-9a47-42cb99823860" path="/var/lib/kubelet/pods/398c83c7-84b4-4daf-9a47-42cb99823860/volumes" Dec 11 09:34:37 crc kubenswrapper[4992]: I1211 09:34:37.904301 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-859586f498-26phb_4282024f-9d71-4b55-aa65-b0a91e76da62/barbican-api/0.log" Dec 11 09:34:37 crc kubenswrapper[4992]: I1211 09:34:37.975249 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-859586f498-26phb_4282024f-9d71-4b55-aa65-b0a91e76da62/barbican-api-log/0.log" Dec 11 09:34:38 crc kubenswrapper[4992]: I1211 09:34:38.085490 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67cb46677b-w6zfw_a69c55bb-ed74-4b63-a8b6-713b08b1dcb4/barbican-keystone-listener/0.log" Dec 11 09:34:38 crc kubenswrapper[4992]: I1211 09:34:38.131204 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-67cb46677b-w6zfw_a69c55bb-ed74-4b63-a8b6-713b08b1dcb4/barbican-keystone-listener-log/0.log" Dec 11 09:34:38 crc kubenswrapper[4992]: I1211 09:34:38.283258 4992 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76f54b9b99-lv6z6_8cdf31db-16c4-4bfc-bb50-27a283b61abd/barbican-worker/0.log" Dec 11 09:34:38 crc kubenswrapper[4992]: I1211 09:34:38.325432 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76f54b9b99-lv6z6_8cdf31db-16c4-4bfc-bb50-27a283b61abd/barbican-worker-log/0.log" Dec 11 09:34:38 crc kubenswrapper[4992]: I1211 09:34:38.525723 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-v42l7_6066a587-fdc9-4ae8-ad82-4ddf1844f9e6/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:38 crc kubenswrapper[4992]: I1211 09:34:38.572475 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/ceilometer-central-agent/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.309338 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/ceilometer-notification-agent/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.339316 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/proxy-httpd/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.344452 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2c569a72-7d96-4212-b681-f0d5a0c19c61/sg-core/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.562973 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bca5ca17-107c-4c9b-8901-dcf3f962e927/cinder-api-log/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.567524 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bca5ca17-107c-4c9b-8901-dcf3f962e927/cinder-api/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 
09:34:39.703659 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e65777ee-c1c8-48f5-a103-539738e7c293/cinder-scheduler/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.778898 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e65777ee-c1c8-48f5-a103-539738e7c293/probe/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.819107 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-scqpq_8f3e0555-dd40-4a68-bcd1-df6fb4be45df/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:39 crc kubenswrapper[4992]: I1211 09:34:39.970485 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-mxttm_e80e1d51-3960-4957-95e0-987fc9b78120/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:40 crc kubenswrapper[4992]: I1211 09:34:40.055474 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-cpgkh_7e8bbdf3-3509-4a6d-a1c4-decafb575016/init/0.log" Dec 11 09:34:40 crc kubenswrapper[4992]: I1211 09:34:40.233467 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-cpgkh_7e8bbdf3-3509-4a6d-a1c4-decafb575016/init/0.log" Dec 11 09:34:40 crc kubenswrapper[4992]: I1211 09:34:40.282751 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-cpgkh_7e8bbdf3-3509-4a6d-a1c4-decafb575016/dnsmasq-dns/0.log" Dec 11 09:34:40 crc kubenswrapper[4992]: I1211 09:34:40.307173 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bswcc_6af1f9e3-e349-40a7-8985-f114c1c808b3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:40 crc kubenswrapper[4992]: I1211 09:34:40.955484 4992 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_glance-default-external-api-0_112fb236-1ef9-4991-b83c-91c1081483fc/glance-httpd/0.log" Dec 11 09:34:40 crc kubenswrapper[4992]: I1211 09:34:40.978449 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_112fb236-1ef9-4991-b83c-91c1081483fc/glance-log/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.137826 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b8264506-1cab-488a-903d-43a6062db6ae/glance-httpd/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.163895 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b8264506-1cab-488a-903d-43a6062db6ae/glance-log/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.270244 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c6ddf9d4-c2dtv_1d82648a-9f40-4a60-8532-ec3617de1f45/horizon/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.494019 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-59bbf_7243a4bc-1d82-40f0-b28f-f6a181d3771b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.576553 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-v4q2c_865a4175-ac8e-43c9-ab29-824386311e22/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.698849 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5c6ddf9d4-c2dtv_1d82648a-9f40-4a60-8532-ec3617de1f45/horizon-log/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.807499 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29424061-lk7db_7e48de49-fca3-4449-876a-2fafff903b2e/keystone-cron/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.877278 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7ccd8c54fd-6rk8g_10274b54-502d-49df-a610-a6b7cddcce42/keystone-api/0.log" Dec 11 09:34:41 crc kubenswrapper[4992]: I1211 09:34:41.957265 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_c46c3920-de00-4d05-9a50-406b7efd3b8d/kube-state-metrics/0.log" Dec 11 09:34:42 crc kubenswrapper[4992]: I1211 09:34:42.043009 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-r5lln_c8aeb03b-f704-4b27-8eb5-afeac15bcd18/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:42 crc kubenswrapper[4992]: I1211 09:34:42.355420 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7978c485bf-hpg7n_04b4ce41-af3f-42d1-a340-e3d20519f217/neutron-httpd/0.log" Dec 11 09:34:42 crc kubenswrapper[4992]: I1211 09:34:42.359044 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7978c485bf-hpg7n_04b4ce41-af3f-42d1-a340-e3d20519f217/neutron-api/0.log" Dec 11 09:34:42 crc kubenswrapper[4992]: I1211 09:34:42.487598 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-d8cml_dd42fab7-63a0-4b66-8264-335d337ec7b3/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:42 crc kubenswrapper[4992]: I1211 09:34:42.999100 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baafc0d4-8327-40d2-a00b-27c7388b64bf/nova-api-log/0.log" Dec 11 09:34:43 crc kubenswrapper[4992]: I1211 09:34:43.103593 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_dcb13d3b-6f5a-432f-a32f-80fbf81c6adf/nova-cell0-conductor-conductor/0.log" Dec 11 09:34:43 crc kubenswrapper[4992]: I1211 09:34:43.400251 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f22ebfe6-970d-45f1-b5e1-c85baf4c7dbc/nova-cell1-conductor-conductor/0.log" Dec 11 09:34:43 crc kubenswrapper[4992]: I1211 09:34:43.474669 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_34747d0d-221e-453b-9685-2e0ce24f21ff/nova-cell1-novncproxy-novncproxy/0.log" Dec 11 09:34:43 crc kubenswrapper[4992]: I1211 09:34:43.513890 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_baafc0d4-8327-40d2-a00b-27c7388b64bf/nova-api-api/0.log" Dec 11 09:34:43 crc kubenswrapper[4992]: I1211 09:34:43.625436 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gnrls_b236958b-e08b-46ec-9e79-772bcb3d6d14/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:43 crc kubenswrapper[4992]: I1211 09:34:43.758661 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e/nova-metadata-log/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.164592 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c5eb79c-8f1c-4416-ab38-00b67e0b3f86/mysql-bootstrap/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.190131 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_5cd2254a-360d-4e10-8185-12ef58a09c9b/nova-scheduler-scheduler/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.274917 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c5eb79c-8f1c-4416-ab38-00b67e0b3f86/mysql-bootstrap/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: 
I1211 09:34:44.376323 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c5eb79c-8f1c-4416-ab38-00b67e0b3f86/galera/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.499360 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67/mysql-bootstrap/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.648805 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67/mysql-bootstrap/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.773548 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e9cb8c6e-1bff-4d44-b4ee-f91f285f4f67/galera/0.log" Dec 11 09:34:44 crc kubenswrapper[4992]: I1211 09:34:44.840213 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_421fdf51-5a39-4d80-b066-a715006c2f85/openstackclient/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.017475 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-djp6h_43b8eb34-f000-49af-bcf9-7507f85afd2b/ovn-controller/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.143827 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cc6d0cc7-fab7-4c05-b634-d8ad75d7e89e/nova-metadata-metadata/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.218107 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qw2v9_25c315d3-3609-4a88-bf95-4beedb848ecf/openstack-network-exporter/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.277266 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovsdb-server-init/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.484196 4992 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovsdb-server-init/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.506321 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovs-vswitchd/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.526597 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-sw28r_9698b65a-4246-466e-aac8-e7fe29c4063d/ovsdb-server/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.722268 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-slpxr_a47b817b-7906-4327-ba35-740815f4c02c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.743423 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5b008aff-f3e4-46b6-a5ff-52e0d80374d8/openstack-network-exporter/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.838242 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5b008aff-f3e4-46b6-a5ff-52e0d80374d8/ovn-northd/0.log" Dec 11 09:34:45 crc kubenswrapper[4992]: I1211 09:34:45.919244 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c51bf698-2728-4a49-b7e1-d80c304725e2/openstack-network-exporter/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.015454 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c51bf698-2728-4a49-b7e1-d80c304725e2/ovsdbserver-nb/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.184426 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a84aae8d-da28-42b4-80a4-99e157fb57ec/openstack-network-exporter/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 
09:34:46.284123 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a84aae8d-da28-42b4-80a4-99e157fb57ec/ovsdbserver-sb/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.468617 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74db564d44-wj6gh_1215b406-66dc-4132-a0ea-76010ee7b44d/placement-api/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.515310 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-74db564d44-wj6gh_1215b406-66dc-4132-a0ea-76010ee7b44d/placement-log/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.544315 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b10485db-da0e-493a-ad33-82634346be84/setup-container/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.795250 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b10485db-da0e-493a-ad33-82634346be84/rabbitmq/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.806720 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b10485db-da0e-493a-ad33-82634346be84/setup-container/0.log" Dec 11 09:34:46 crc kubenswrapper[4992]: I1211 09:34:46.881538 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_614cd874-917b-4851-b702-cfb170fcec4d/setup-container/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.069809 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_614cd874-917b-4851-b702-cfb170fcec4d/setup-container/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.072444 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_614cd874-917b-4851-b702-cfb170fcec4d/rabbitmq/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.265284 4992 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5mhkv_57455273-3fb5-408e-a80c-c42880a6b0bf/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.325423 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-nwx9k_ff4798f8-6563-4d95-ab98-252c0417f16f/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.477534 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lqw8n_540586d6-4da9-4c8e-9866-dbe51de9f643/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.548856 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bb5gt_daffe0c7-1479-4565-9035-46e508469995/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.654766 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qd5rv_b8616302-c54e-49e1-98cb-924b70e8050f/ssh-known-hosts-edpm-deployment/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.842262 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d94866685-kpw9g_f10bd5d3-8ab6-4950-96e9-b683e47619ea/proxy-server/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.855474 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d94866685-kpw9g_f10bd5d3-8ab6-4950-96e9-b683e47619ea/proxy-httpd/0.log" Dec 11 09:34:47 crc kubenswrapper[4992]: I1211 09:34:47.982904 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dpvlt_95883dfb-ad1a-4d13-889e-4b9f73ded332/swift-ring-rebalance/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 
09:34:48.207097 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-auditor/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.383528 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-reaper/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.395135 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-replicator/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.424576 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/account-server/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.444602 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-auditor/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.567212 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-replicator/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.603877 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-updater/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.603994 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/container-server/0.log" Dec 11 09:34:48 crc kubenswrapper[4992]: I1211 09:34:48.636488 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-auditor/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.363155 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-expirer/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.404623 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-updater/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.457848 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-replicator/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.478646 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/object-server/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.664306 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/rsync/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.678879 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_47851b57-2a65-4a8a-b2a2-f01a5a2d7833/swift-recon-cron/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.778619 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qdq6m_35d02426-6179-4c35-8e14-f1e06f6684f6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:34:49 crc kubenswrapper[4992]: I1211 09:34:49.920049 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_79d2a033-d073-439d-8d2c-779b95da30f4/tempest-tests-tempest-tests-runner/0.log" Dec 11 09:34:50 crc kubenswrapper[4992]: I1211 09:34:50.043679 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_216b8df4-225b-4a21-abf4-23a79bab418a/test-operator-logs-container/0.log" Dec 11 09:34:50 crc kubenswrapper[4992]: I1211 
09:34:50.723672 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mbsz8_f5fefa11-ffb3-491d-90e5-c957a37896ef/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 09:35:03 crc kubenswrapper[4992]: I1211 09:35:03.679250 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a9e5a806-cf0a-4149-81d7-803170a48b0e/memcached/0.log" Dec 11 09:35:05 crc kubenswrapper[4992]: I1211 09:35:05.379255 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:35:05 crc kubenswrapper[4992]: I1211 09:35:05.379311 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:35:16 crc kubenswrapper[4992]: I1211 09:35:16.468013 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/util/0.log" Dec 11 09:35:16 crc kubenswrapper[4992]: I1211 09:35:16.648250 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/pull/0.log" Dec 11 09:35:16 crc kubenswrapper[4992]: I1211 09:35:16.649356 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/pull/0.log" Dec 11 09:35:16 crc 
kubenswrapper[4992]: I1211 09:35:16.649755 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/util/0.log" Dec 11 09:35:16 crc kubenswrapper[4992]: I1211 09:35:16.892878 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/pull/0.log" Dec 11 09:35:16 crc kubenswrapper[4992]: I1211 09:35:16.915388 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/util/0.log" Dec 11 09:35:16 crc kubenswrapper[4992]: I1211 09:35:16.922589 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_2ada1c3f0fe62aa4f437dd1154f09b1b759ad23fe095da957c33419ca6mnb8n_6fef3605-0f1b-4298-b43e-13d68847c03f/extract/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.137460 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-l5mc5_a84e6e65-9f83-405a-a478-a53e125d5845/kube-rbac-proxy/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.143391 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-l5mc5_a84e6e65-9f83-405a-a478-a53e125d5845/manager/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.634191 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-55h47_07859dd8-8995-4214-8ee9-6648fa5a292e/kube-rbac-proxy/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.683379 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-55h47_07859dd8-8995-4214-8ee9-6648fa5a292e/manager/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.710117 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6gl95_a39ad598-19ba-42cb-9f35-538b68de7b04/kube-rbac-proxy/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.795891 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-6gl95_a39ad598-19ba-42cb-9f35-538b68de7b04/manager/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.901033 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-bdp4x_42106600-d00d-477a-aaec-102ba03cb5c6/kube-rbac-proxy/0.log" Dec 11 09:35:17 crc kubenswrapper[4992]: I1211 09:35:17.950098 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-bdp4x_42106600-d00d-477a-aaec-102ba03cb5c6/manager/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.071535 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-hsldx_e501b125-ca5e-41f0-88c1-a9fda63de236/kube-rbac-proxy/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.124781 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-hsldx_e501b125-ca5e-41f0-88c1-a9fda63de236/manager/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.188260 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-jqlq7_b4d8b09b-a162-43bd-a91f-dc87e5c9c956/kube-rbac-proxy/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.297785 
4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-jqlq7_b4d8b09b-a162-43bd-a91f-dc87e5c9c956/manager/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.350847 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-hd9fc_fc892dae-199a-49ca-8ddd-863a6b8426d7/kube-rbac-proxy/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.580825 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-r792s_2a91887f-977b-43dd-b638-0391348bf5d7/kube-rbac-proxy/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.583076 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-r792s_2a91887f-977b-43dd-b638-0391348bf5d7/manager/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.697503 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-hd9fc_fc892dae-199a-49ca-8ddd-863a6b8426d7/manager/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.764458 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-l6gnl_74f7d667-67f0-459b-a7a0-f46c0e095485/kube-rbac-proxy/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.866162 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-l6gnl_74f7d667-67f0-459b-a7a0-f46c0e095485/manager/0.log" Dec 11 09:35:18 crc kubenswrapper[4992]: I1211 09:35:18.949976 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-2rs5h_995a7c64-c843-4200-b1cf-9fe6d774f457/manager/0.log" Dec 11 09:35:18 crc 
kubenswrapper[4992]: I1211 09:35:18.999699 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-2rs5h_995a7c64-c843-4200-b1cf-9fe6d774f457/kube-rbac-proxy/0.log" Dec 11 09:35:19 crc kubenswrapper[4992]: I1211 09:35:19.757329 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7nw5b_2b8e6bee-2aae-4689-898a-b298fd5a3d00/manager/0.log" Dec 11 09:35:19 crc kubenswrapper[4992]: I1211 09:35:19.773447 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-7nw5b_2b8e6bee-2aae-4689-898a-b298fd5a3d00/kube-rbac-proxy/0.log" Dec 11 09:35:19 crc kubenswrapper[4992]: I1211 09:35:19.874022 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-rd8j7_aa6fcfad-b39a-4621-aebe-0b48a4106495/kube-rbac-proxy/0.log" Dec 11 09:35:19 crc kubenswrapper[4992]: I1211 09:35:19.965446 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9r2v8_22b393e2-e34e-4f47-a8f8-136d9a6613f6/kube-rbac-proxy/0.log" Dec 11 09:35:19 crc kubenswrapper[4992]: I1211 09:35:19.977659 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-rd8j7_aa6fcfad-b39a-4621-aebe-0b48a4106495/manager/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.146183 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-9r2v8_22b393e2-e34e-4f47-a8f8-136d9a6613f6/manager/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.153028 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-r9hft_94ff8875-2a35-47c0-8da4-1fcc4fd0836e/manager/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.178295 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-r9hft_94ff8875-2a35-47c0-8da4-1fcc4fd0836e/kube-rbac-proxy/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.324906 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f5wrjt_24ddc127-1ac3-4dd9-ae14-c133c9ad387b/manager/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.359606 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f5wrjt_24ddc127-1ac3-4dd9-ae14-c133c9ad387b/kube-rbac-proxy/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.628361 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gzhqj_36fe56e4-3db8-4d2a-8fe4-bfac398f3d92/registry-server/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.697753 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-94cdf5849-dv4jk_442cb00a-6225-47e0-a88d-6d615414e5a4/operator/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.857842 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4lp9k_2b843345-399a-41e3-abe0-f7f41682250a/kube-rbac-proxy/0.log" Dec 11 09:35:20 crc kubenswrapper[4992]: I1211 09:35:20.954021 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-4lp9k_2b843345-399a-41e3-abe0-f7f41682250a/manager/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.078388 4992 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8q2xn_d708dd00-6c6a-4dd0-ac04-e0b57a753f1f/kube-rbac-proxy/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.095185 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-8q2xn_d708dd00-6c6a-4dd0-ac04-e0b57a753f1f/manager/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.225042 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-kxgst_5c6deb1d-64a1-4f75-baaf-3ce6c908b850/operator/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.388546 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d9jmh_ae97f467-cfd0-46c1-a261-36f09387f3e0/kube-rbac-proxy/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.440801 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-d9jmh_ae97f467-cfd0-46c1-a261-36f09387f3e0/manager/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.506818 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5dfd9f965d-t689z_4c6d188e-81e4-4ba9-a555-5dbda4f39d1d/manager/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.552043 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-l54jq_2e7b36cb-508f-46e0-acd1-6eca36c331b1/kube-rbac-proxy/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.622834 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-l54jq_2e7b36cb-508f-46e0-acd1-6eca36c331b1/manager/0.log" Dec 11 09:35:21 crc 
kubenswrapper[4992]: I1211 09:35:21.692545 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-g9552_d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a/manager/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.739805 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-g9552_d2cb37d7-1dea-4b2a-bbb4-6a4b2dfaaa0a/kube-rbac-proxy/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.803507 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rph2p_3635faed-4894-4eb8-94f7-33b055b860c4/kube-rbac-proxy/0.log" Dec 11 09:35:21 crc kubenswrapper[4992]: I1211 09:35:21.894380 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-rph2p_3635faed-4894-4eb8-94f7-33b055b860c4/manager/0.log" Dec 11 09:35:30 crc kubenswrapper[4992]: I1211 09:35:30.969194 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b87jk"] Dec 11 09:35:30 crc kubenswrapper[4992]: E1211 09:35:30.970232 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398c83c7-84b4-4daf-9a47-42cb99823860" containerName="container-00" Dec 11 09:35:30 crc kubenswrapper[4992]: I1211 09:35:30.970249 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="398c83c7-84b4-4daf-9a47-42cb99823860" containerName="container-00" Dec 11 09:35:30 crc kubenswrapper[4992]: I1211 09:35:30.970449 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="398c83c7-84b4-4daf-9a47-42cb99823860" containerName="container-00" Dec 11 09:35:30 crc kubenswrapper[4992]: I1211 09:35:30.971798 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:30 crc kubenswrapper[4992]: I1211 09:35:30.987626 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b87jk"] Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.046759 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-utilities\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.046984 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-catalog-content\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.047107 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99fn\" (UniqueName: \"kubernetes.io/projected/dc65adcd-c691-4128-b41d-b5cc820dfc1d-kube-api-access-v99fn\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.148817 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99fn\" (UniqueName: \"kubernetes.io/projected/dc65adcd-c691-4128-b41d-b5cc820dfc1d-kube-api-access-v99fn\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.148994 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-utilities\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.149090 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-catalog-content\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.150095 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-catalog-content\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.150120 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-utilities\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.171420 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99fn\" (UniqueName: \"kubernetes.io/projected/dc65adcd-c691-4128-b41d-b5cc820dfc1d-kube-api-access-v99fn\") pod \"community-operators-b87jk\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.292835 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:31 crc kubenswrapper[4992]: I1211 09:35:31.855111 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b87jk"] Dec 11 09:35:32 crc kubenswrapper[4992]: I1211 09:35:32.411902 4992 generic.go:334] "Generic (PLEG): container finished" podID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerID="dff00985f87a8bc96686864ac59f6221acf2ed36297df03faf8886739a1bd400" exitCode=0 Dec 11 09:35:32 crc kubenswrapper[4992]: I1211 09:35:32.411969 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87jk" event={"ID":"dc65adcd-c691-4128-b41d-b5cc820dfc1d","Type":"ContainerDied","Data":"dff00985f87a8bc96686864ac59f6221acf2ed36297df03faf8886739a1bd400"} Dec 11 09:35:32 crc kubenswrapper[4992]: I1211 09:35:32.412432 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87jk" event={"ID":"dc65adcd-c691-4128-b41d-b5cc820dfc1d","Type":"ContainerStarted","Data":"0227cdce36141ccfe52ee5254ff73a08c96cc9e7770c6d5c635219b27538046c"} Dec 11 09:35:32 crc kubenswrapper[4992]: I1211 09:35:32.415739 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:35:34 crc kubenswrapper[4992]: I1211 09:35:34.431266 4992 generic.go:334] "Generic (PLEG): container finished" podID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerID="d25e571991e86dbeab0f743d33bfe42885c241990598ca90f400be86aa9c246c" exitCode=0 Dec 11 09:35:34 crc kubenswrapper[4992]: I1211 09:35:34.431351 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87jk" event={"ID":"dc65adcd-c691-4128-b41d-b5cc820dfc1d","Type":"ContainerDied","Data":"d25e571991e86dbeab0f743d33bfe42885c241990598ca90f400be86aa9c246c"} Dec 11 09:35:35 crc kubenswrapper[4992]: I1211 09:35:35.378339 4992 patch_prober.go:28] interesting 
pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:35:35 crc kubenswrapper[4992]: I1211 09:35:35.378685 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:35:35 crc kubenswrapper[4992]: I1211 09:35:35.443431 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87jk" event={"ID":"dc65adcd-c691-4128-b41d-b5cc820dfc1d","Type":"ContainerStarted","Data":"c66b9da1231f859ed84dd17bb5a13e4bbfdc4a6cb1ead367a91ee53fcec0688d"} Dec 11 09:35:40 crc kubenswrapper[4992]: I1211 09:35:40.949415 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9jmlp_f495c66f-c76e-41c7-a70b-71c7a19c8c6a/control-plane-machine-set-operator/0.log" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.094515 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g5d6r_7f77b180-f28c-472b-a577-44ef5012100c/kube-rbac-proxy/0.log" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.105774 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-g5d6r_7f77b180-f28c-472b-a577-44ef5012100c/machine-api-operator/0.log" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.293666 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.293751 
4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.343916 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.366484 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b87jk" podStartSLOduration=8.793736219 podStartE2EDuration="11.366459585s" podCreationTimestamp="2025-12-11 09:35:30 +0000 UTC" firstStartedPulling="2025-12-11 09:35:32.415461416 +0000 UTC m=+4356.674935342" lastFinishedPulling="2025-12-11 09:35:34.988184782 +0000 UTC m=+4359.247658708" observedRunningTime="2025-12-11 09:35:35.462429178 +0000 UTC m=+4359.721903104" watchObservedRunningTime="2025-12-11 09:35:41.366459585 +0000 UTC m=+4365.625933511" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.546614 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:41 crc kubenswrapper[4992]: I1211 09:35:41.595158 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b87jk"] Dec 11 09:35:43 crc kubenswrapper[4992]: I1211 09:35:43.512298 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b87jk" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="registry-server" containerID="cri-o://c66b9da1231f859ed84dd17bb5a13e4bbfdc4a6cb1ead367a91ee53fcec0688d" gracePeriod=2 Dec 11 09:35:45 crc kubenswrapper[4992]: I1211 09:35:45.540030 4992 generic.go:334] "Generic (PLEG): container finished" podID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerID="c66b9da1231f859ed84dd17bb5a13e4bbfdc4a6cb1ead367a91ee53fcec0688d" exitCode=0 Dec 11 09:35:45 crc kubenswrapper[4992]: I1211 
09:35:45.540110 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87jk" event={"ID":"dc65adcd-c691-4128-b41d-b5cc820dfc1d","Type":"ContainerDied","Data":"c66b9da1231f859ed84dd17bb5a13e4bbfdc4a6cb1ead367a91ee53fcec0688d"} Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.275686 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.381283 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-catalog-content\") pod \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.381536 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-utilities\") pod \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.381773 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v99fn\" (UniqueName: \"kubernetes.io/projected/dc65adcd-c691-4128-b41d-b5cc820dfc1d-kube-api-access-v99fn\") pod \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\" (UID: \"dc65adcd-c691-4128-b41d-b5cc820dfc1d\") " Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.383978 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-utilities" (OuterVolumeSpecName: "utilities") pod "dc65adcd-c691-4128-b41d-b5cc820dfc1d" (UID: "dc65adcd-c691-4128-b41d-b5cc820dfc1d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.388170 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc65adcd-c691-4128-b41d-b5cc820dfc1d-kube-api-access-v99fn" (OuterVolumeSpecName: "kube-api-access-v99fn") pod "dc65adcd-c691-4128-b41d-b5cc820dfc1d" (UID: "dc65adcd-c691-4128-b41d-b5cc820dfc1d"). InnerVolumeSpecName "kube-api-access-v99fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.430363 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc65adcd-c691-4128-b41d-b5cc820dfc1d" (UID: "dc65adcd-c691-4128-b41d-b5cc820dfc1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.484318 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v99fn\" (UniqueName: \"kubernetes.io/projected/dc65adcd-c691-4128-b41d-b5cc820dfc1d-kube-api-access-v99fn\") on node \"crc\" DevicePath \"\"" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.484356 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.484365 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc65adcd-c691-4128-b41d-b5cc820dfc1d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.552682 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b87jk" 
event={"ID":"dc65adcd-c691-4128-b41d-b5cc820dfc1d","Type":"ContainerDied","Data":"0227cdce36141ccfe52ee5254ff73a08c96cc9e7770c6d5c635219b27538046c"} Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.552747 4992 scope.go:117] "RemoveContainer" containerID="c66b9da1231f859ed84dd17bb5a13e4bbfdc4a6cb1ead367a91ee53fcec0688d" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.552760 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b87jk" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.575924 4992 scope.go:117] "RemoveContainer" containerID="d25e571991e86dbeab0f743d33bfe42885c241990598ca90f400be86aa9c246c" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.592897 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b87jk"] Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.602456 4992 scope.go:117] "RemoveContainer" containerID="dff00985f87a8bc96686864ac59f6221acf2ed36297df03faf8886739a1bd400" Dec 11 09:35:46 crc kubenswrapper[4992]: I1211 09:35:46.603858 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b87jk"] Dec 11 09:35:48 crc kubenswrapper[4992]: I1211 09:35:48.106982 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" path="/var/lib/kubelet/pods/dc65adcd-c691-4128-b41d-b5cc820dfc1d/volumes" Dec 11 09:35:52 crc kubenswrapper[4992]: I1211 09:35:52.716538 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-wz2kx_02923a4e-c47d-47f9-8a4c-389310df14cb/cert-manager-controller/0.log" Dec 11 09:35:52 crc kubenswrapper[4992]: I1211 09:35:52.896383 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-xsswz_b8d49b68-c215-4f0e-a508-043a98247366/cert-manager-cainjector/0.log" Dec 11 09:35:52 crc 
kubenswrapper[4992]: I1211 09:35:52.954574 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-f6zvc_f92a18ba-a108-476e-a4f6-d7f4446b860a/cert-manager-webhook/0.log" Dec 11 09:36:04 crc kubenswrapper[4992]: I1211 09:36:04.540264 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-wkx28_65477465-ca8e-4379-bb0a-7940542990f7/nmstate-console-plugin/0.log" Dec 11 09:36:04 crc kubenswrapper[4992]: I1211 09:36:04.675204 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sh9d8_865e017c-606b-41e4-82f7-3e0520607d02/nmstate-handler/0.log" Dec 11 09:36:04 crc kubenswrapper[4992]: I1211 09:36:04.761809 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-6csbm_a098ff29-c757-4eac-b38d-33f5d50e1aea/kube-rbac-proxy/0.log" Dec 11 09:36:04 crc kubenswrapper[4992]: I1211 09:36:04.841102 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-6csbm_a098ff29-c757-4eac-b38d-33f5d50e1aea/nmstate-metrics/0.log" Dec 11 09:36:04 crc kubenswrapper[4992]: I1211 09:36:04.937454 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-mxq6q_2dec2cea-664e-4421-8e41-8c15c02aa08f/nmstate-operator/0.log" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.028266 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-t6ptn_ba5caba0-f6b6-400d-ab83-b1079de7af46/nmstate-webhook/0.log" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.378822 4992 patch_prober.go:28] interesting pod/machine-config-daemon-m8b9c container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.378885 4992 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.378932 4992 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.379770 4992 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43"} pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.379843 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerName="machine-config-daemon" containerID="cri-o://28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" gracePeriod=600 Dec 11 09:36:05 crc kubenswrapper[4992]: E1211 09:36:05.515194 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.713561 
4992 generic.go:334] "Generic (PLEG): container finished" podID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" exitCode=0 Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.713603 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerDied","Data":"28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43"} Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.713678 4992 scope.go:117] "RemoveContainer" containerID="70a483776bc5c324d5f53833009f43fbeb28d39591e52e9b29cefc41b9c1b645" Dec 11 09:36:05 crc kubenswrapper[4992]: I1211 09:36:05.714449 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:36:05 crc kubenswrapper[4992]: E1211 09:36:05.714900 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:36:19 crc kubenswrapper[4992]: I1211 09:36:19.592197 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-g47kr_8b4f416d-3812-4dc9-8fa4-5667d5f2339b/kube-rbac-proxy/0.log" Dec 11 09:36:19 crc kubenswrapper[4992]: I1211 09:36:19.611194 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-g47kr_8b4f416d-3812-4dc9-8fa4-5667d5f2339b/controller/0.log" Dec 11 09:36:19 crc kubenswrapper[4992]: I1211 09:36:19.783923 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.095561 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:36:20 crc kubenswrapper[4992]: E1211 09:36:20.095831 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.353891 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.368832 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.377832 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.407256 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.604078 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.604238 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.614625 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.653189 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.779505 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-frr-files/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.779553 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-reloader/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.868477 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/cp-metrics/0.log" Dec 11 09:36:20 crc kubenswrapper[4992]: I1211 09:36:20.871697 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/controller/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.007532 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/frr-metrics/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.079449 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/kube-rbac-proxy/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.081050 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/kube-rbac-proxy-frr/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.264156 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/reloader/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.382547 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-wjv4l_3cb24660-a51b-4701-a0e2-f4edc25d0960/frr-k8s-webhook-server/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.644739 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7c6f79466f-zkrkf_48361753-e5d3-4311-b9e0-78de22981923/manager/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.761156 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7d64577cd-5nznr_19d7371c-f87c-44c9-868e-636a222d606f/webhook-server/0.log" Dec 11 09:36:21 crc kubenswrapper[4992]: I1211 09:36:21.918704 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lfvx8_07a93a81-2773-49bc-a345-528d2d52dbd6/kube-rbac-proxy/0.log" Dec 11 09:36:22 crc kubenswrapper[4992]: I1211 09:36:22.357301 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-splw9_34a79bdc-5774-468d-9136-9e03be822975/frr/0.log" Dec 11 09:36:22 crc kubenswrapper[4992]: I1211 09:36:22.365599 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lfvx8_07a93a81-2773-49bc-a345-528d2d52dbd6/speaker/0.log" Dec 11 09:36:32 crc kubenswrapper[4992]: I1211 09:36:32.095137 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:36:32 crc kubenswrapper[4992]: E1211 09:36:32.097254 4992 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:36:33 crc kubenswrapper[4992]: I1211 09:36:33.942159 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/util/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.192321 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/pull/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.194454 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/pull/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.228244 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/util/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.410089 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/util/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.441179 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/pull/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 
09:36:34.445875 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d46njxn_9776ba1b-3de0-4a08-b43d-f6c5f7487ffe/extract/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.609981 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/util/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.758983 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/pull/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.763764 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/pull/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.775350 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/util/0.log" Dec 11 09:36:34 crc kubenswrapper[4992]: I1211 09:36:34.992043 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/pull/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.006237 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/extract/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.022179 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8jfcgr_12e967da-ba1b-419e-ae17-80b2f60a3300/util/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.162077 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-utilities/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.324747 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-utilities/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.341300 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-content/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.361046 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-content/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.564691 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-utilities/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.585819 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/extract-content/0.log" Dec 11 09:36:35 crc kubenswrapper[4992]: I1211 09:36:35.817028 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-utilities/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.073016 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-content/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.078549 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wxlmp_125324f4-c036-4fd9-aa27-4f9e5774b59e/registry-server/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.088425 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-content/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.131555 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-utilities/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.271973 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-content/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.329904 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/extract-utilities/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.482479 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-flz8s_f3229bda-a4f4-42ac-8936-829ab828fce4/marketplace-operator/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.553613 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-utilities/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.893524 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-content/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.905177 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-7s6x7_3369c4a5-b910-47d9-b2e5-92cedd6b0ef2/registry-server/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.912983 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-content/0.log" Dec 11 09:36:36 crc kubenswrapper[4992]: I1211 09:36:36.934344 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-utilities/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.076937 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-utilities/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.127236 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/extract-content/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.247047 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qqwws_9a2d8904-1dcb-4d16-82e8-9db4a8d986ef/registry-server/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.322013 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-utilities/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.446960 4992 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-utilities/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.463758 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-content/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.503834 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-content/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.633014 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-content/0.log" Dec 11 09:36:37 crc kubenswrapper[4992]: I1211 09:36:37.654807 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/extract-utilities/0.log" Dec 11 09:36:38 crc kubenswrapper[4992]: I1211 09:36:38.270718 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-q748k_22fe6c50-eedd-446c-8475-80ecb4676613/registry-server/0.log" Dec 11 09:36:43 crc kubenswrapper[4992]: I1211 09:36:43.095276 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:36:43 crc kubenswrapper[4992]: E1211 09:36:43.096067 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:36:57 crc 
kubenswrapper[4992]: I1211 09:36:57.095308 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:36:57 crc kubenswrapper[4992]: E1211 09:36:57.096101 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:37:11 crc kubenswrapper[4992]: I1211 09:37:11.095616 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:37:11 crc kubenswrapper[4992]: E1211 09:37:11.096493 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:37:25 crc kubenswrapper[4992]: I1211 09:37:25.095612 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:37:25 crc kubenswrapper[4992]: E1211 09:37:25.096421 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 
11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.624519 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zpw7w"] Dec 11 09:37:32 crc kubenswrapper[4992]: E1211 09:37:32.625597 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="extract-content" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.625619 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="extract-content" Dec 11 09:37:32 crc kubenswrapper[4992]: E1211 09:37:32.625668 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="extract-utilities" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.625678 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="extract-utilities" Dec 11 09:37:32 crc kubenswrapper[4992]: E1211 09:37:32.625697 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="registry-server" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.625705 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="registry-server" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.625962 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc65adcd-c691-4128-b41d-b5cc820dfc1d" containerName="registry-server" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.629554 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.644523 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpw7w"] Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.808961 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjlf7\" (UniqueName: \"kubernetes.io/projected/e3542625-9c29-4ebd-af06-7788f32bd5f1-kube-api-access-zjlf7\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.809039 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-catalog-content\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.809145 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-utilities\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.911258 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-utilities\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.911398 4992 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-zjlf7\" (UniqueName: \"kubernetes.io/projected/e3542625-9c29-4ebd-af06-7788f32bd5f1-kube-api-access-zjlf7\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.911444 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-catalog-content\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.912115 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-catalog-content\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.912127 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-utilities\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.938773 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjlf7\" (UniqueName: \"kubernetes.io/projected/e3542625-9c29-4ebd-af06-7788f32bd5f1-kube-api-access-zjlf7\") pod \"redhat-marketplace-zpw7w\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:32 crc kubenswrapper[4992]: I1211 09:37:32.952612 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:33 crc kubenswrapper[4992]: I1211 09:37:33.530899 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpw7w"] Dec 11 09:37:34 crc kubenswrapper[4992]: I1211 09:37:34.538806 4992 generic.go:334] "Generic (PLEG): container finished" podID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerID="7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871" exitCode=0 Dec 11 09:37:34 crc kubenswrapper[4992]: I1211 09:37:34.538980 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerDied","Data":"7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871"} Dec 11 09:37:34 crc kubenswrapper[4992]: I1211 09:37:34.539085 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerStarted","Data":"f29187968bade5edab74c2a8bc067ab024900eaf442a15bf6a8946b3533005da"} Dec 11 09:37:35 crc kubenswrapper[4992]: I1211 09:37:35.549362 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerStarted","Data":"757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20"} Dec 11 09:37:36 crc kubenswrapper[4992]: I1211 09:37:36.560730 4992 generic.go:334] "Generic (PLEG): container finished" podID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerID="757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20" exitCode=0 Dec 11 09:37:36 crc kubenswrapper[4992]: I1211 09:37:36.560781 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" 
event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerDied","Data":"757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20"} Dec 11 09:37:38 crc kubenswrapper[4992]: I1211 09:37:38.095479 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:37:38 crc kubenswrapper[4992]: E1211 09:37:38.096152 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:37:38 crc kubenswrapper[4992]: I1211 09:37:38.589879 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerStarted","Data":"6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6"} Dec 11 09:37:38 crc kubenswrapper[4992]: I1211 09:37:38.614752 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zpw7w" podStartSLOduration=3.172960457 podStartE2EDuration="6.614727099s" podCreationTimestamp="2025-12-11 09:37:32 +0000 UTC" firstStartedPulling="2025-12-11 09:37:34.542298974 +0000 UTC m=+4478.801772900" lastFinishedPulling="2025-12-11 09:37:37.984065616 +0000 UTC m=+4482.243539542" observedRunningTime="2025-12-11 09:37:38.611806968 +0000 UTC m=+4482.871280914" watchObservedRunningTime="2025-12-11 09:37:38.614727099 +0000 UTC m=+4482.874201015" Dec 11 09:37:42 crc kubenswrapper[4992]: I1211 09:37:42.953928 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:42 crc 
kubenswrapper[4992]: I1211 09:37:42.954910 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:43 crc kubenswrapper[4992]: I1211 09:37:43.014663 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:43 crc kubenswrapper[4992]: I1211 09:37:43.679621 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:43 crc kubenswrapper[4992]: I1211 09:37:43.738713 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpw7w"] Dec 11 09:37:45 crc kubenswrapper[4992]: I1211 09:37:45.646239 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zpw7w" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="registry-server" containerID="cri-o://6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6" gracePeriod=2 Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.123493 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.155358 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-catalog-content\") pod \"e3542625-9c29-4ebd-af06-7788f32bd5f1\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.155651 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-utilities\") pod \"e3542625-9c29-4ebd-af06-7788f32bd5f1\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.156482 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjlf7\" (UniqueName: \"kubernetes.io/projected/e3542625-9c29-4ebd-af06-7788f32bd5f1-kube-api-access-zjlf7\") pod \"e3542625-9c29-4ebd-af06-7788f32bd5f1\" (UID: \"e3542625-9c29-4ebd-af06-7788f32bd5f1\") " Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.156378 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-utilities" (OuterVolumeSpecName: "utilities") pod "e3542625-9c29-4ebd-af06-7788f32bd5f1" (UID: "e3542625-9c29-4ebd-af06-7788f32bd5f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.164357 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3542625-9c29-4ebd-af06-7788f32bd5f1-kube-api-access-zjlf7" (OuterVolumeSpecName: "kube-api-access-zjlf7") pod "e3542625-9c29-4ebd-af06-7788f32bd5f1" (UID: "e3542625-9c29-4ebd-af06-7788f32bd5f1"). InnerVolumeSpecName "kube-api-access-zjlf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.179770 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3542625-9c29-4ebd-af06-7788f32bd5f1" (UID: "e3542625-9c29-4ebd-af06-7788f32bd5f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.258343 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjlf7\" (UniqueName: \"kubernetes.io/projected/e3542625-9c29-4ebd-af06-7788f32bd5f1-kube-api-access-zjlf7\") on node \"crc\" DevicePath \"\"" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.258536 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.258618 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3542625-9c29-4ebd-af06-7788f32bd5f1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.655798 4992 generic.go:334] "Generic (PLEG): container finished" podID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerID="6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6" exitCode=0 Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.655846 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerDied","Data":"6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6"} Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.655875 4992 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-zpw7w" event={"ID":"e3542625-9c29-4ebd-af06-7788f32bd5f1","Type":"ContainerDied","Data":"f29187968bade5edab74c2a8bc067ab024900eaf442a15bf6a8946b3533005da"} Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.655894 4992 scope.go:117] "RemoveContainer" containerID="6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.656043 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zpw7w" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.706462 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpw7w"] Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.710625 4992 scope.go:117] "RemoveContainer" containerID="757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.716681 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zpw7w"] Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.732057 4992 scope.go:117] "RemoveContainer" containerID="7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.782158 4992 scope.go:117] "RemoveContainer" containerID="6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6" Dec 11 09:37:46 crc kubenswrapper[4992]: E1211 09:37:46.782676 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6\": container with ID starting with 6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6 not found: ID does not exist" containerID="6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.782721 4992 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6"} err="failed to get container status \"6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6\": rpc error: code = NotFound desc = could not find container \"6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6\": container with ID starting with 6d8be5287e9878311a4072650961b0d7d8900d9f6f5301f6a110012aa56c15f6 not found: ID does not exist" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.782749 4992 scope.go:117] "RemoveContainer" containerID="757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20" Dec 11 09:37:46 crc kubenswrapper[4992]: E1211 09:37:46.783404 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20\": container with ID starting with 757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20 not found: ID does not exist" containerID="757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.783442 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20"} err="failed to get container status \"757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20\": rpc error: code = NotFound desc = could not find container \"757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20\": container with ID starting with 757bcebded34d0f84beea33fbdb90f5c95615b18d51de1d47b86545ea003ca20 not found: ID does not exist" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.783490 4992 scope.go:117] "RemoveContainer" containerID="7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871" Dec 11 09:37:46 crc kubenswrapper[4992]: E1211 
09:37:46.783946 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871\": container with ID starting with 7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871 not found: ID does not exist" containerID="7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871" Dec 11 09:37:46 crc kubenswrapper[4992]: I1211 09:37:46.783988 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871"} err="failed to get container status \"7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871\": rpc error: code = NotFound desc = could not find container \"7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871\": container with ID starting with 7bada8cc4ccf953ae50c0979710fc77690ea49858bb838aea53eb60115215871 not found: ID does not exist" Dec 11 09:37:48 crc kubenswrapper[4992]: I1211 09:37:48.111148 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" path="/var/lib/kubelet/pods/e3542625-9c29-4ebd-af06-7788f32bd5f1/volumes" Dec 11 09:37:53 crc kubenswrapper[4992]: I1211 09:37:53.096182 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:37:53 crc kubenswrapper[4992]: E1211 09:37:53.097330 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:38:08 crc kubenswrapper[4992]: I1211 09:38:08.095348 
4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:38:08 crc kubenswrapper[4992]: E1211 09:38:08.096460 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:38:20 crc kubenswrapper[4992]: I1211 09:38:20.095318 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:38:20 crc kubenswrapper[4992]: E1211 09:38:20.096225 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:38:24 crc kubenswrapper[4992]: I1211 09:38:24.668058 4992 generic.go:334] "Generic (PLEG): container finished" podID="f01174cb-d167-4994-a981-e03257bf5af8" containerID="07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b" exitCode=0 Dec 11 09:38:24 crc kubenswrapper[4992]: I1211 09:38:24.668152 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rcjl7/must-gather-bd29j" event={"ID":"f01174cb-d167-4994-a981-e03257bf5af8","Type":"ContainerDied","Data":"07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b"} Dec 11 09:38:24 crc kubenswrapper[4992]: I1211 09:38:24.669091 4992 scope.go:117] "RemoveContainer" 
containerID="07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b" Dec 11 09:38:25 crc kubenswrapper[4992]: I1211 09:38:25.553135 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rcjl7_must-gather-bd29j_f01174cb-d167-4994-a981-e03257bf5af8/gather/0.log" Dec 11 09:38:33 crc kubenswrapper[4992]: I1211 09:38:33.096057 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:38:33 crc kubenswrapper[4992]: E1211 09:38:33.097174 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:38:35 crc kubenswrapper[4992]: I1211 09:38:35.780358 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rcjl7/must-gather-bd29j"] Dec 11 09:38:35 crc kubenswrapper[4992]: I1211 09:38:35.781030 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rcjl7/must-gather-bd29j" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="copy" containerID="cri-o://675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da" gracePeriod=2 Dec 11 09:38:35 crc kubenswrapper[4992]: I1211 09:38:35.790140 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rcjl7/must-gather-bd29j"] Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.274154 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rcjl7_must-gather-bd29j_f01174cb-d167-4994-a981-e03257bf5af8/copy/0.log" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.274870 4992 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.439876 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01174cb-d167-4994-a981-e03257bf5af8-must-gather-output\") pod \"f01174cb-d167-4994-a981-e03257bf5af8\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.440011 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8s6t\" (UniqueName: \"kubernetes.io/projected/f01174cb-d167-4994-a981-e03257bf5af8-kube-api-access-j8s6t\") pod \"f01174cb-d167-4994-a981-e03257bf5af8\" (UID: \"f01174cb-d167-4994-a981-e03257bf5af8\") " Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.445448 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01174cb-d167-4994-a981-e03257bf5af8-kube-api-access-j8s6t" (OuterVolumeSpecName: "kube-api-access-j8s6t") pod "f01174cb-d167-4994-a981-e03257bf5af8" (UID: "f01174cb-d167-4994-a981-e03257bf5af8"). InnerVolumeSpecName "kube-api-access-j8s6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.543118 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8s6t\" (UniqueName: \"kubernetes.io/projected/f01174cb-d167-4994-a981-e03257bf5af8-kube-api-access-j8s6t\") on node \"crc\" DevicePath \"\"" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.590092 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01174cb-d167-4994-a981-e03257bf5af8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f01174cb-d167-4994-a981-e03257bf5af8" (UID: "f01174cb-d167-4994-a981-e03257bf5af8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.644707 4992 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01174cb-d167-4994-a981-e03257bf5af8-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.770529 4992 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rcjl7_must-gather-bd29j_f01174cb-d167-4994-a981-e03257bf5af8/copy/0.log" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.770990 4992 generic.go:334] "Generic (PLEG): container finished" podID="f01174cb-d167-4994-a981-e03257bf5af8" containerID="675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da" exitCode=143 Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.771118 4992 scope.go:117] "RemoveContainer" containerID="675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.771374 4992 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rcjl7/must-gather-bd29j" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.801193 4992 scope.go:117] "RemoveContainer" containerID="07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.867289 4992 scope.go:117] "RemoveContainer" containerID="675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da" Dec 11 09:38:36 crc kubenswrapper[4992]: E1211 09:38:36.867780 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da\": container with ID starting with 675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da not found: ID does not exist" containerID="675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.867811 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da"} err="failed to get container status \"675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da\": rpc error: code = NotFound desc = could not find container \"675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da\": container with ID starting with 675baf494708301b03861e5b26a3dbf8b13b79e3fae820429e27abd7a02670da not found: ID does not exist" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.867831 4992 scope.go:117] "RemoveContainer" containerID="07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b" Dec 11 09:38:36 crc kubenswrapper[4992]: E1211 09:38:36.868195 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b\": container with ID starting with 
07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b not found: ID does not exist" containerID="07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b" Dec 11 09:38:36 crc kubenswrapper[4992]: I1211 09:38:36.868243 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b"} err="failed to get container status \"07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b\": rpc error: code = NotFound desc = could not find container \"07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b\": container with ID starting with 07c5a1a184b54443d03bfd78c21f4c958db5f035ec703b540dda36d83c7cf86b not found: ID does not exist" Dec 11 09:38:38 crc kubenswrapper[4992]: I1211 09:38:38.106094 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01174cb-d167-4994-a981-e03257bf5af8" path="/var/lib/kubelet/pods/f01174cb-d167-4994-a981-e03257bf5af8/volumes" Dec 11 09:38:46 crc kubenswrapper[4992]: I1211 09:38:46.102181 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:38:46 crc kubenswrapper[4992]: E1211 09:38:46.103433 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:38:58 crc kubenswrapper[4992]: I1211 09:38:58.096271 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:38:58 crc kubenswrapper[4992]: E1211 09:38:58.097178 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:39:13 crc kubenswrapper[4992]: I1211 09:39:13.096086 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:39:13 crc kubenswrapper[4992]: E1211 09:39:13.096795 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:39:27 crc kubenswrapper[4992]: I1211 09:39:27.096144 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:39:27 crc kubenswrapper[4992]: E1211 09:39:27.097052 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:39:39 crc kubenswrapper[4992]: I1211 09:39:39.094845 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:39:39 crc kubenswrapper[4992]: E1211 09:39:39.105090 4992 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:39:51 crc kubenswrapper[4992]: I1211 09:39:51.095468 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:39:51 crc kubenswrapper[4992]: E1211 09:39:51.096306 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:40:06 crc kubenswrapper[4992]: I1211 09:40:06.102916 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:40:06 crc kubenswrapper[4992]: E1211 09:40:06.103871 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:40:19 crc kubenswrapper[4992]: I1211 09:40:19.716389 4992 scope.go:117] "RemoveContainer" containerID="36b679bb20493d184e0ff3774956f3d6e191923c9939b6525855975f6616a501" Dec 11 09:40:21 crc kubenswrapper[4992]: I1211 09:40:21.094999 4992 scope.go:117] 
"RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:40:21 crc kubenswrapper[4992]: E1211 09:40:21.095525 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:40:32 crc kubenswrapper[4992]: I1211 09:40:32.095553 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:40:32 crc kubenswrapper[4992]: E1211 09:40:32.096339 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:40:45 crc kubenswrapper[4992]: I1211 09:40:45.095454 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:40:45 crc kubenswrapper[4992]: E1211 09:40:45.096274 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:40:58 crc kubenswrapper[4992]: I1211 09:40:58.095700 
4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:40:58 crc kubenswrapper[4992]: E1211 09:40:58.096556 4992 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-m8b9c_openshift-machine-config-operator(fa42ae65-5fda-421e-b27a-6d8a0b2defb3)\"" pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" podUID="fa42ae65-5fda-421e-b27a-6d8a0b2defb3" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.845692 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49nt4"] Dec 11 09:41:06 crc kubenswrapper[4992]: E1211 09:41:06.849765 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="extract-content" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.849801 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="extract-content" Dec 11 09:41:06 crc kubenswrapper[4992]: E1211 09:41:06.849814 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="copy" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.849820 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="copy" Dec 11 09:41:06 crc kubenswrapper[4992]: E1211 09:41:06.849840 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="registry-server" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.849847 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="registry-server" Dec 11 09:41:06 crc kubenswrapper[4992]: E1211 
09:41:06.849867 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="gather" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.849874 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="gather" Dec 11 09:41:06 crc kubenswrapper[4992]: E1211 09:41:06.849894 4992 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="extract-utilities" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.849904 4992 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="extract-utilities" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.850283 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3542625-9c29-4ebd-af06-7788f32bd5f1" containerName="registry-server" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.850308 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="copy" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.850326 4992 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01174cb-d167-4994-a981-e03257bf5af8" containerName="gather" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.851749 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.863940 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49nt4"] Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.947115 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn959\" (UniqueName: \"kubernetes.io/projected/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-kube-api-access-hn959\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.947173 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-catalog-content\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:06 crc kubenswrapper[4992]: I1211 09:41:06.947377 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-utilities\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.049089 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-utilities\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.049214 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hn959\" (UniqueName: \"kubernetes.io/projected/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-kube-api-access-hn959\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.049263 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-catalog-content\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.049726 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-utilities\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.049755 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-catalog-content\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.070370 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn959\" (UniqueName: \"kubernetes.io/projected/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-kube-api-access-hn959\") pod \"redhat-operators-49nt4\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.178800 4992 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:07 crc kubenswrapper[4992]: I1211 09:41:07.626776 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49nt4"] Dec 11 09:41:07 crc kubenswrapper[4992]: W1211 09:41:07.637878 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e783f1c_5b63_447c_87f3_8e3ccfa330ce.slice/crio-5bfb14f9e58ffdc80f0f9bba743df5884e728ff9ad716ef7e9a237e7bc758be9 WatchSource:0}: Error finding container 5bfb14f9e58ffdc80f0f9bba743df5884e728ff9ad716ef7e9a237e7bc758be9: Status 404 returned error can't find the container with id 5bfb14f9e58ffdc80f0f9bba743df5884e728ff9ad716ef7e9a237e7bc758be9 Dec 11 09:41:08 crc kubenswrapper[4992]: I1211 09:41:08.140101 4992 generic.go:334] "Generic (PLEG): container finished" podID="0e783f1c-5b63-447c-87f3-8e3ccfa330ce" containerID="04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02" exitCode=0 Dec 11 09:41:08 crc kubenswrapper[4992]: I1211 09:41:08.140174 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49nt4" event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerDied","Data":"04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02"} Dec 11 09:41:08 crc kubenswrapper[4992]: I1211 09:41:08.140215 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49nt4" event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerStarted","Data":"5bfb14f9e58ffdc80f0f9bba743df5884e728ff9ad716ef7e9a237e7bc758be9"} Dec 11 09:41:08 crc kubenswrapper[4992]: I1211 09:41:08.142595 4992 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 09:41:09 crc kubenswrapper[4992]: I1211 09:41:09.152379 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-49nt4" event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerStarted","Data":"2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583"} Dec 11 09:41:10 crc kubenswrapper[4992]: I1211 09:41:10.163904 4992 generic.go:334] "Generic (PLEG): container finished" podID="0e783f1c-5b63-447c-87f3-8e3ccfa330ce" containerID="2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583" exitCode=0 Dec 11 09:41:10 crc kubenswrapper[4992]: I1211 09:41:10.163981 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49nt4" event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerDied","Data":"2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583"} Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.177056 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49nt4" event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerStarted","Data":"6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5"} Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.199430 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49nt4" podStartSLOduration=2.696191857 podStartE2EDuration="5.199412519s" podCreationTimestamp="2025-12-11 09:41:06 +0000 UTC" firstStartedPulling="2025-12-11 09:41:08.142248368 +0000 UTC m=+4692.401722294" lastFinishedPulling="2025-12-11 09:41:10.64546904 +0000 UTC m=+4694.904942956" observedRunningTime="2025-12-11 09:41:11.195132255 +0000 UTC m=+4695.454606181" watchObservedRunningTime="2025-12-11 09:41:11.199412519 +0000 UTC m=+4695.458886445" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.426036 4992 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cz7q6"] Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.428602 4992 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.439289 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cz7q6"] Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.546563 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-catalog-content\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.546951 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8dl5\" (UniqueName: \"kubernetes.io/projected/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-kube-api-access-l8dl5\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.547107 4992 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-utilities\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.649363 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-catalog-content\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.649811 4992 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8dl5\" (UniqueName: \"kubernetes.io/projected/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-kube-api-access-l8dl5\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.650033 4992 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-utilities\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.650030 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-catalog-content\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:11 crc kubenswrapper[4992]: I1211 09:41:11.650300 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-utilities\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:12 crc kubenswrapper[4992]: I1211 09:41:12.263770 4992 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8dl5\" (UniqueName: \"kubernetes.io/projected/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-kube-api-access-l8dl5\") pod \"certified-operators-cz7q6\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:12 crc kubenswrapper[4992]: I1211 09:41:12.361243 4992 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:12 crc kubenswrapper[4992]: I1211 09:41:12.834274 4992 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cz7q6"] Dec 11 09:41:12 crc kubenswrapper[4992]: W1211 09:41:12.838523 4992 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f0e6b8_a2c5_4d11_b1df_a996e5a850ee.slice/crio-51cd8d38a21405d2a79cca8e73f10b69e86f6c5fdf69d4cf476f66d167d32b76 WatchSource:0}: Error finding container 51cd8d38a21405d2a79cca8e73f10b69e86f6c5fdf69d4cf476f66d167d32b76: Status 404 returned error can't find the container with id 51cd8d38a21405d2a79cca8e73f10b69e86f6c5fdf69d4cf476f66d167d32b76 Dec 11 09:41:13 crc kubenswrapper[4992]: I1211 09:41:13.095992 4992 scope.go:117] "RemoveContainer" containerID="28e539b39e94bd0be89f20b888b90be3be3833d858eaa7d8bfaca2d7a5f28b43" Dec 11 09:41:13 crc kubenswrapper[4992]: I1211 09:41:13.198352 4992 generic.go:334] "Generic (PLEG): container finished" podID="07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" containerID="80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041" exitCode=0 Dec 11 09:41:13 crc kubenswrapper[4992]: I1211 09:41:13.198394 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerDied","Data":"80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041"} Dec 11 09:41:13 crc kubenswrapper[4992]: I1211 09:41:13.198421 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerStarted","Data":"51cd8d38a21405d2a79cca8e73f10b69e86f6c5fdf69d4cf476f66d167d32b76"} Dec 11 09:41:14 crc kubenswrapper[4992]: I1211 09:41:14.210878 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-m8b9c" event={"ID":"fa42ae65-5fda-421e-b27a-6d8a0b2defb3","Type":"ContainerStarted","Data":"c8c5d007135d7c78b505148fde21afba9d0339c7a2eec4bdb4a061cd22ce0edc"} Dec 11 09:41:15 crc kubenswrapper[4992]: I1211 09:41:15.223304 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerStarted","Data":"e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278"} Dec 11 09:41:17 crc kubenswrapper[4992]: I1211 09:41:17.179808 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:17 crc kubenswrapper[4992]: I1211 09:41:17.180378 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:17 crc kubenswrapper[4992]: I1211 09:41:17.229822 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:17 crc kubenswrapper[4992]: I1211 09:41:17.323228 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:18 crc kubenswrapper[4992]: I1211 09:41:18.251321 4992 generic.go:334] "Generic (PLEG): container finished" podID="07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" containerID="e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278" exitCode=0 Dec 11 09:41:18 crc kubenswrapper[4992]: I1211 09:41:18.251404 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerDied","Data":"e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278"} Dec 11 09:41:18 crc kubenswrapper[4992]: I1211 09:41:18.425358 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-49nt4"] Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.261492 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-49nt4" podUID="0e783f1c-5b63-447c-87f3-8e3ccfa330ce" containerName="registry-server" containerID="cri-o://6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5" gracePeriod=2 Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.749364 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.864792 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn959\" (UniqueName: \"kubernetes.io/projected/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-kube-api-access-hn959\") pod \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.864941 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-catalog-content\") pod \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.864975 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-utilities\") pod \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\" (UID: \"0e783f1c-5b63-447c-87f3-8e3ccfa330ce\") " Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.865997 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-utilities" (OuterVolumeSpecName: "utilities") pod "0e783f1c-5b63-447c-87f3-8e3ccfa330ce" (UID: 
"0e783f1c-5b63-447c-87f3-8e3ccfa330ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.874126 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-kube-api-access-hn959" (OuterVolumeSpecName: "kube-api-access-hn959") pod "0e783f1c-5b63-447c-87f3-8e3ccfa330ce" (UID: "0e783f1c-5b63-447c-87f3-8e3ccfa330ce"). InnerVolumeSpecName "kube-api-access-hn959". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.967510 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:41:19 crc kubenswrapper[4992]: I1211 09:41:19.967547 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn959\" (UniqueName: \"kubernetes.io/projected/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-kube-api-access-hn959\") on node \"crc\" DevicePath \"\"" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.283069 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerStarted","Data":"bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589"} Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.288610 4992 generic.go:334] "Generic (PLEG): container finished" podID="0e783f1c-5b63-447c-87f3-8e3ccfa330ce" containerID="6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5" exitCode=0 Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.288690 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49nt4" 
event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerDied","Data":"6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5"} Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.288728 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49nt4" event={"ID":"0e783f1c-5b63-447c-87f3-8e3ccfa330ce","Type":"ContainerDied","Data":"5bfb14f9e58ffdc80f0f9bba743df5884e728ff9ad716ef7e9a237e7bc758be9"} Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.288751 4992 scope.go:117] "RemoveContainer" containerID="6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.288869 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49nt4" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.311972 4992 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cz7q6" podStartSLOduration=3.262244855 podStartE2EDuration="9.311926304s" podCreationTimestamp="2025-12-11 09:41:11 +0000 UTC" firstStartedPulling="2025-12-11 09:41:13.200104182 +0000 UTC m=+4697.459578108" lastFinishedPulling="2025-12-11 09:41:19.249785631 +0000 UTC m=+4703.509259557" observedRunningTime="2025-12-11 09:41:20.307923757 +0000 UTC m=+4704.567397683" watchObservedRunningTime="2025-12-11 09:41:20.311926304 +0000 UTC m=+4704.571400230" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.316657 4992 scope.go:117] "RemoveContainer" containerID="2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.342597 4992 scope.go:117] "RemoveContainer" containerID="04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.366448 4992 scope.go:117] "RemoveContainer" 
containerID="6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5" Dec 11 09:41:20 crc kubenswrapper[4992]: E1211 09:41:20.379796 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5\": container with ID starting with 6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5 not found: ID does not exist" containerID="6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.379843 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5"} err="failed to get container status \"6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5\": rpc error: code = NotFound desc = could not find container \"6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5\": container with ID starting with 6140f4b825b62a275eeb6b21f173e2fb002c8a2911aae1b6239cc23b7d039ba5 not found: ID does not exist" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.379874 4992 scope.go:117] "RemoveContainer" containerID="2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583" Dec 11 09:41:20 crc kubenswrapper[4992]: E1211 09:41:20.384979 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583\": container with ID starting with 2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583 not found: ID does not exist" containerID="2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.385010 4992 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583"} err="failed to get container status \"2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583\": rpc error: code = NotFound desc = could not find container \"2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583\": container with ID starting with 2e633f5c3629c8166fde105e684ee08ad3763179c7b08c44f443002b0f054583 not found: ID does not exist" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.385030 4992 scope.go:117] "RemoveContainer" containerID="04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02" Dec 11 09:41:20 crc kubenswrapper[4992]: E1211 09:41:20.385308 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02\": container with ID starting with 04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02 not found: ID does not exist" containerID="04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02" Dec 11 09:41:20 crc kubenswrapper[4992]: I1211 09:41:20.385338 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02"} err="failed to get container status \"04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02\": rpc error: code = NotFound desc = could not find container \"04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02\": container with ID starting with 04a71b415821fb20856838b192fd32e46517ea60119d48d4ef6d49d5bc0c6f02 not found: ID does not exist" Dec 11 09:41:22 crc kubenswrapper[4992]: I1211 09:41:22.361494 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:22 crc kubenswrapper[4992]: I1211 09:41:22.362615 4992 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:22 crc kubenswrapper[4992]: I1211 09:41:22.709706 4992 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:23 crc kubenswrapper[4992]: I1211 09:41:23.458714 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e783f1c-5b63-447c-87f3-8e3ccfa330ce" (UID: "0e783f1c-5b63-447c-87f3-8e3ccfa330ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:41:23 crc kubenswrapper[4992]: I1211 09:41:23.546048 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e783f1c-5b63-447c-87f3-8e3ccfa330ce-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:41:23 crc kubenswrapper[4992]: I1211 09:41:23.628867 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49nt4"] Dec 11 09:41:23 crc kubenswrapper[4992]: I1211 09:41:23.637088 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-49nt4"] Dec 11 09:41:24 crc kubenswrapper[4992]: I1211 09:41:24.105373 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e783f1c-5b63-447c-87f3-8e3ccfa330ce" path="/var/lib/kubelet/pods/0e783f1c-5b63-447c-87f3-8e3ccfa330ce/volumes" Dec 11 09:41:24 crc kubenswrapper[4992]: I1211 09:41:24.616932 4992 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:26 crc kubenswrapper[4992]: I1211 09:41:26.615965 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cz7q6"] Dec 11 09:41:26 crc kubenswrapper[4992]: I1211 
09:41:26.616513 4992 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cz7q6" podUID="07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" containerName="registry-server" containerID="cri-o://bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589" gracePeriod=2 Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.048688 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.218567 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-catalog-content\") pod \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.218953 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-utilities\") pod \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.219236 4992 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8dl5\" (UniqueName: \"kubernetes.io/projected/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-kube-api-access-l8dl5\") pod \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\" (UID: \"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee\") " Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.219868 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-utilities" (OuterVolumeSpecName: "utilities") pod "07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" (UID: "07f0e6b8-a2c5-4d11-b1df-a996e5a850ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.225146 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-kube-api-access-l8dl5" (OuterVolumeSpecName: "kube-api-access-l8dl5") pod "07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" (UID: "07f0e6b8-a2c5-4d11-b1df-a996e5a850ee"). InnerVolumeSpecName "kube-api-access-l8dl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.268165 4992 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" (UID: "07f0e6b8-a2c5-4d11-b1df-a996e5a850ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.322165 4992 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8dl5\" (UniqueName: \"kubernetes.io/projected/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-kube-api-access-l8dl5\") on node \"crc\" DevicePath \"\"" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.322457 4992 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.322549 4992 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.378538 4992 generic.go:334] "Generic (PLEG): container finished" podID="07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" 
containerID="bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589" exitCode=0 Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.378599 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerDied","Data":"bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589"} Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.378645 4992 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cz7q6" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.378851 4992 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz7q6" event={"ID":"07f0e6b8-a2c5-4d11-b1df-a996e5a850ee","Type":"ContainerDied","Data":"51cd8d38a21405d2a79cca8e73f10b69e86f6c5fdf69d4cf476f66d167d32b76"} Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.378975 4992 scope.go:117] "RemoveContainer" containerID="bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.405528 4992 scope.go:117] "RemoveContainer" containerID="e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.413958 4992 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cz7q6"] Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.421729 4992 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cz7q6"] Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.427012 4992 scope.go:117] "RemoveContainer" containerID="80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.477038 4992 scope.go:117] "RemoveContainer" containerID="bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589" Dec 11 
09:41:27 crc kubenswrapper[4992]: E1211 09:41:27.477626 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589\": container with ID starting with bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589 not found: ID does not exist" containerID="bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.477700 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589"} err="failed to get container status \"bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589\": rpc error: code = NotFound desc = could not find container \"bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589\": container with ID starting with bf57e1681bc4f3ca5c57907f5cab61b13cbeeeffc671db8a17b64d6d562e1589 not found: ID does not exist" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.477761 4992 scope.go:117] "RemoveContainer" containerID="e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278" Dec 11 09:41:27 crc kubenswrapper[4992]: E1211 09:41:27.478375 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278\": container with ID starting with e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278 not found: ID does not exist" containerID="e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.478407 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278"} err="failed to get container status 
\"e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278\": rpc error: code = NotFound desc = could not find container \"e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278\": container with ID starting with e03d4d847e2b9c041500b46f8673d7fe1898e0885639750cd8d53ccfbaeae278 not found: ID does not exist" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.478431 4992 scope.go:117] "RemoveContainer" containerID="80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041" Dec 11 09:41:27 crc kubenswrapper[4992]: E1211 09:41:27.478790 4992 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041\": container with ID starting with 80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041 not found: ID does not exist" containerID="80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041" Dec 11 09:41:27 crc kubenswrapper[4992]: I1211 09:41:27.478817 4992 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041"} err="failed to get container status \"80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041\": rpc error: code = NotFound desc = could not find container \"80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041\": container with ID starting with 80dc4a3f03272ef671dcd2523f9091cc0050a7535d107af314af0909de883041 not found: ID does not exist" Dec 11 09:41:28 crc kubenswrapper[4992]: I1211 09:41:28.108958 4992 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f0e6b8-a2c5-4d11-b1df-a996e5a850ee" path="/var/lib/kubelet/pods/07f0e6b8-a2c5-4d11-b1df-a996e5a850ee/volumes"